817 results for Dynamically Adapted Information Systems
Abstract:
Due to advances in information technology in general, and in databases in particular, data storage devices are becoming cheaper and data processing speed is increasing. As a result, organizations tend to store large volumes of data that hold great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the Analysis phase, with respect to data processing modeling. As a starting point, we have used a data model adapted to the semantics involved in multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides us with all the possible ways to cross multidimensional model data automatically. Building on this, we propose use case diagrams and descriptions, which can be considered patterns representing DSS functionality with regard to processing the DW data on which DSS are based. We highlight the reusability and automation benefits that can be achieved, and we believe this study can serve as a guide for the development of DSS.
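The abstract refers to an algorithm that yields all possible ways to cross multidimensional model data, but does not give the algorithm itself. A minimal sketch of such an enumeration, assuming hypothetical dimension names from a simple star schema, might look like:

```python
from itertools import combinations

def crossings(dimensions):
    """Enumerate every non-empty subset of dimensions, i.e. every
    possible way to cross (group) the facts of a multidimensional model."""
    result = []
    for r in range(1, len(dimensions) + 1):
        result.extend(combinations(dimensions, r))
    return result

# Hypothetical dimensions of a sales data warehouse.
dims = ["time", "product", "region"]
for crossing in crossings(dims):
    print(crossing)
```

For n dimensions this yields 2^n - 1 crossings; a real DW model would also have to account for dimension hierarchies, which this sketch ignores.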
Abstract:
Arizona Department of Transportation, Phoenix
Abstract:
Federal Highway Administration, Office of Research, Washington, D.C.
Abstract:
Federal Highway Administration, Office of Research, Washington, D.C.
Abstract:
Federal Highway Administration, Office of Research, Washington, D.C.
Abstract:
"B-284472"--P. 3.
Abstract:
A joint project of the Chief Financial Officers Council and the Joint Financial Management Improvement Program.
Abstract:
Purpose: The aim of this project was to design and evaluate a system that would produce tailored information for stroke patients and their carers, customised according to their informational needs, and facilitate communication between the patient and health professional. Method: A human factors development approach was used to develop a computer system, which dynamically compiles stroke education booklets for patients and carers. Patients and carers are able to select the topics about which they wish to receive information, the amount of information they want, and the font size of the printed booklet. The system is designed so that the health professional interacts with it, thereby providing opportunities for communication between the health professional and patient/carer at a number of points in time. Results: Preliminary evaluation of the system by health professionals, patients and carers was positive. A randomised controlled trial that examines the effect of the system on patient and carer outcomes is underway. (C) 2004 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Workflow technology has delivered effectively for a large class of business processes, providing the requisite control and monitoring functions. At the same time, this technology has been the target of much criticism due to its limited ability to cope with dynamically changing business conditions, which require business processes to be adapted frequently, and/or its limited ability to model business processes that cannot be entirely predefined. Requirements indicate the need for generic solutions in which a balance between process control and flexibility may be achieved. In this paper we present a framework that allows the workflow to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. This framework is based on the notion of process constraints. Whereas process constraints may be specified for any aspect of the workflow (structural, temporal, etc.), our focus in this paper is on a constraint that allows dynamic selection of activities for inclusion in a given instance. We call these cardinality constraints, and this paper discusses their specification and validation requirements.
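The specification and validation of cardinality constraints are not detailed in the abstract. A minimal sketch of validating one such constraint at instance-creation time, with hypothetical activity names and a simple [min, max] bound on how many optional activities may be selected, could be:

```python
def validate_selection(selected, optional_pool, min_card, max_card):
    """Check a cardinality constraint: the activities chosen for this
    workflow instance must come from the allowed pool, and the number
    chosen must fall within [min_card, max_card]."""
    if not set(selected) <= set(optional_pool):
        return False  # an activity outside the permitted pool was chosen
    return min_card <= len(selected) <= max_card

# Hypothetical instance: pick between 1 and 2 review activities.
pool = ["peer_review", "manager_review", "external_audit"]
print(validate_selection(["peer_review"], pool, 1, 2))            # True
print(validate_selection(["peer_review", "manager_review",
                          "external_audit"], pool, 1, 2))         # False
```

In the framework described, such a check would run when the partially specified model is completed for a particular instance, before execution proceeds.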
Abstract:
Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise and allow interrogation of key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated in the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby hopefully improve user recognition of the quality of datasets. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating the quality of data, users consider eight facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all eight informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference via a third user study to arrive at a final graphical GEO label representation.
When integrated in the GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are being used to dynamically assess information availability and generate the GEO labels. The producer metadata document can either be a standard ISO compliant metadata record supplied with the dataset, or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews and user-supplied citations for a dataset. The GEO label will provide drill-down functionality which will allow a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will be used to provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into the GEOSS.
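As an illustration only (the actual GEO label service and its metadata formats are not specified in this abstract), a sketch of assessing per-facet information availability from two hypothetical metadata dictionaries, one producer-supplied and one from the feedback server, might look like:

```python
# The eight informational facets identified in the user studies.
FACETS = ["producer_profile", "producer_comments", "standards_compliance",
          "community_advice", "ratings", "citations",
          "expert_reviews", "quantitative_quality"]

def assess_availability(producer_meta, feedback_meta):
    """Return, for each facet, whether any information is available in
    the supplied producer or feedback metadata (plain dicts here; real
    records would be ISO-style XML metadata documents)."""
    merged = {**producer_meta, **feedback_meta}
    return {facet: bool(merged.get(facet)) for facet in FACETS}

# Hypothetical dataset: a producer profile exists, one user rating exists.
label = assess_availability({"producer_profile": "Example producer"},
                            {"ratings": [4, 5]})
print(label["producer_profile"], label["ratings"], label["citations"])
```

A label renderer would then draw each facet icon as "available" or "unavailable" from this mapping, with drill-down links to the underlying records.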
Abstract:
Purpose: The purpose of this paper is to investigate enterprise resource planning (ERP) systems development and emerging practices in the management of enterprises (i.e. parts of companies working with parts of other companies to deliver a complex product and/or service) and identify any apparent correlations. Suitable a priori contingency frameworks are then used and extended to explain apparent correlations. Discussion provides guidance for researchers and practitioners to deliver better strategic, structural and operational competitive advantage through this approach, coined here as the "enterprization of operations". Design/methodology/approach: Theoretical induction uses a new empirical longitudinal case study from Zoomlion (a Chinese manufacturing company), built using an adapted form of template analysis, to produce a new contingency framework. Findings: Three main types of enterprise and three main types of ERP system are defined, and correlations between them are explained. Two relevant a priori frameworks are used to induct a new contingency model to support the enterprization of operations, known as the dynamic enterprise reference grid for ERP (DERG-ERP). Research limitations/implications: The findings are based on one longitudinal case study. Further case studies are currently being conducted in the UK and China. Practical implications: The new contingency model, the DERG-ERP, serves as a guide for ERP vendors, information systems management and operations managers hoping to grow and sustain their competitive advantage with respect to effective enterprise strategy, enterprise structure and ERP systems. Originality/value: This research explains how ERP systems and the effective management of enterprises should develop in order to sustain competitive advantage with respect to enterprise strategy, enterprise structure and ERP systems use. © Emerald Group Publishing Limited.
Abstract:
Никола Вълчанов, Тодорка Терзиева, Владимир Шкуртов, Антон Илиев - One of the main application areas of computer informatics is the automation of mathematical computations. Information systems cover various domains such as accounting, e-learning/testing, simulation environments, and so on. They work with computational libraries that are specific to the scope of the system. Although such systems may be polished and work flawlessly, they become outdated if they are not maintained. In this work we describe a mechanism that uses computational libraries dynamically and decides at run time (intelligently or interactively) how and when they should be used. The aim of this paper is to present an architecture for computation-driven systems. It focuses on the benefits of using the right design patterns in order to ensure extensibility and reduce complexity.
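The run-time selection mechanism is only described in outline above. A minimal sketch in the spirit of the strategy pattern the authors allude to, with hypothetical computation names, could be:

```python
class ComputationRegistry:
    """A tiny registry that lets computational routines be plugged in
    and selected by name at run time, so new libraries can be added
    without changing the calling system (strategy pattern)."""

    def __init__(self):
        self._ops = {}

    def register(self, name, func):
        self._ops[name] = func

    def run(self, name, *args):
        if name not in self._ops:
            raise KeyError(f"no computation registered under {name!r}")
        return self._ops[name](*args)

# Hypothetical computations registered at start-up or load time.
registry = ComputationRegistry()
registry.register("mean", lambda xs: sum(xs) / len(xs))
registry.register("total", sum)
print(registry.run("mean", [2, 4, 6]))  # 4.0
```

The "intelligent or interactive" decision described in the abstract would sit in front of `run`, choosing which registered name to dispatch to for a given request.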
Abstract:
The aim was to assess the completeness and reliability of data from the Information System on Live Births (Sinasc). A cross-sectional analysis of the reliability and completeness of Sinasc data was performed using a sample of Live Birth Certificates (LBC) from 2009, relating to births in Campinas, Southeast Brazil. For data analysis, hospitals were grouped according to category of service (Unified National Health System, private or both), 600 LBCs were randomly selected, and the data were collected in LBC copies through mothers' and newborns' hospital records and by telephone interviews. The completeness of LBCs was evaluated by calculating the percentage of blank fields, and the agreement between original LBCs and copies was evaluated by Kappa and intraclass correlation coefficients. The percentage of completeness of LBCs ranged from 99.8% to 100%. For most items, the agreement was excellent. However, agreement was acceptable for marital status, maternal education and newborn infants' race/color, low for prenatal visits and presence of birth defects, and very low for the number of deceased children. The results showed that the municipal Sinasc is reliable for most of the studied variables. Investment in training of professionals is suggested in an attempt to improve the system's capacity to support planning and implementation of health activities for the benefit of the maternal and child population.
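The study measures agreement with the Kappa coefficient. As a reminder of the underlying calculation (not the authors' code), a minimal Cohen's kappa for two equal-length lists of categorical ratings is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Expected agreement if both raters assigned categories independently.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two transcriptions of the same four records.
print(cohens_kappa(["y", "y", "n", "n"], ["y", "n", "n", "n"]))  # 0.5
```

Values near 1 indicate excellent agreement, while values near 0 indicate agreement no better than chance, which is how the per-variable results above are graded.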