140 results for 350202 Business Information Systems (incl. Data Processing)


Relevance:

100.00%

Abstract:

Within the information systems field, the task of conceptual modeling involves building a representation of selected phenomena in some domain. High-quality conceptual-modeling work is important because it facilitates early detection and correction of system development errors. It also plays an increasingly important role in activities like business process reengineering and documentation of best-practice data and process models in enterprise resource planning systems. Yet little research has been undertaken on many aspects of conceptual modeling. In this paper, we propose a framework to motivate research that addresses the following fundamental question: How can we model the world to better facilitate our developing, implementing, using, and maintaining more valuable information systems? The framework comprises four elements: conceptual-modeling grammars, conceptual-modeling methods, conceptual-modeling scripts, and conceptual-modeling contexts. We provide examples of the types of research that have already been undertaken on each element and illustrate research opportunities that exist.

Relevance:

100.00%

Abstract:

Background and purpose: Survey data quality is a combination of the representativeness of the sample, the accuracy and precision of measurements, and data processing and management, with several subcomponents in each. The purpose of this paper is to show how, in the final risk factor surveys of the WHO MONICA Project, information on data quality was obtained, quantified, and used in the analysis.

Methods and results: In the WHO MONICA (Multinational MONItoring of trends and determinants in CArdiovascular disease) Project, information about the data quality components was documented in retrospective quality assessment reports. On the basis of the documented information and the survey data, the quality of each data component was assessed and summarized using quality scores. The quality scores were used in sensitivity testing of the results, both by excluding populations with low quality scores and by weighting the data by their quality scores.

Conclusions: Detailed documentation of all survey procedures, with standardized protocols, training, and quality control, is a step towards optimizing data quality; quantifying data quality is a further step. The methods used in the WHO MONICA Project could be adopted to improve quality in other health surveys.
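To make the weighting and exclusion steps concrete, the following minimal Python sketch applies both sensitivity tests to a toy set of population estimates. The population names, estimates, quality scores, and the 0.7 exclusion threshold are illustrative assumptions, not values from the MONICA Project.

populations = [
    # (population id, mean risk factor estimate, quality score in [0, 1])
    ("pop_A", 5.6, 0.95),
    ("pop_B", 5.1, 0.80),
    ("pop_C", 6.2, 0.55),
]

def pooled_estimate(pops):
    """Unweighted mean of the population estimates."""
    return sum(v for _, v, _ in pops) / len(pops)

def quality_weighted_estimate(pops):
    """Mean of the estimates, weighted by each population's quality score."""
    total_weight = sum(q for _, _, q in pops)
    return sum(v * q for _, v, q in pops) / total_weight

def excluding_low_quality(pops, threshold=0.7):
    """Sensitivity test: drop populations scoring below the threshold."""
    return pooled_estimate([p for p in pops if p[2] >= threshold])

print(f"all populations:       {pooled_estimate(populations):.2f}")
print(f"quality-weighted:      {quality_weighted_estimate(populations):.2f}")
print(f"excluding low quality: {excluding_low_quality(populations):.2f}")

Comparing the three estimates shows how sensitive a reported result is to the populations with the weakest data quality.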

Relevance:

100.00%

Abstract:

Land-surface processes include a broad class of models that operate at a landscape scale. Current modelling approaches tend to be specialised towards one type of process, yet the interaction of processes is increasingly seen as important for a more integrated approach to land management. This paper presents a technique and a tool that may be applied generically to landscape processes. The technique tracks moving interfaces across landscapes for processes such as water flow, biochemical diffusion, and plant dispersal. Its theoretical development applies a Lagrangian approach to motion over an Eulerian grid space by tracking quantities across a landscape as an evolving front. An algorithm for this technique, the level set method, is implemented in a geographical information system (GIS). It fits the field data model in GIS and is implemented as operators in map algebra. The paper describes an implementation of the level set method in a map algebra programming language called MapScript and gives example program scripts for applications in ecology and hydrology.
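As a concrete illustration of the technique (in generic Python rather than the MapScript language the paper describes), the sketch below advances a front across a raster using a first-order upwind level set update, phi_t + F |grad phi| = 0, with a spatially varying speed field of the kind a hydrological application might supply. The grid size, speed values, channel geometry, and time step are assumptions made for the example.

import numpy as np

n, dx, dt, steps = 100, 1.0, 0.4, 60
y, x = np.mgrid[0:n, 0:n]

# Signed distance to an initial circular front (negative = inside).
phi = np.sqrt((x - 20.0) ** 2 + (y - 50.0) ** 2) - 5.0

# Speed field F: the front moves faster inside a horizontal "channel".
F = np.where(np.abs(y - 50) < 10, 1.0, 0.3)

for _ in range(steps):
    # One-sided differences; np.roll gives periodic boundaries,
    # which is acceptable for a small sketch like this.
    dxm = (phi - np.roll(phi, 1, axis=1)) / dx   # backward in x
    dxp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward in x
    dym = (phi - np.roll(phi, 1, axis=0)) / dx   # backward in y
    dyp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward in y

    # Godunov upwind gradient magnitude for an outward-moving front (F > 0).
    grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2
                   + np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)

    # Level set update: phi_t + F |grad phi| = 0.
    phi = phi - dt * F * grad

print(f"front has reached {(phi < 0).mean():.1%} of the landscape")

In the paper's setting the same update is expressed as map algebra operators over GIS layers; here phi and F are plain NumPy arrays standing in for those layers.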

Relevance:

100.00%

Abstract:

Medication data retrieved from Australian Repatriation Pharmaceutical Benefits Scheme (RPBS) claims for 44 veterans residing in nursing homes, and from Pharmaceutical Benefits Scheme (PBS) claims for 898 nursing home residents, were compared with medication data from nursing home records to determine the optimal time interval for retrieving claims data and to assess their validity. Optimal matching was achieved using 12 weeks of RPBS claims data: 60% of medications in the RPBS claims were located in nursing home administration records, and 78% of medications administered to nursing home residents were identified in RPBS claims. In comparison, 48% of medications administered to nursing home residents could be found in 12 weeks of PBS data, and 56% of medications present in PBS claims could be matched with nursing home administration records. RPBS claims data were superior to PBS data because of the larger number of scheduled items available to veterans and the veteran's file number, which acts as a unique identifier. These findings should be taken into account when using prescription claims data for medication histories, prescriber feedback, drug utilisation, intervention, or epidemiological studies.
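The percentages above come down to set matching over a dispensing window; the Python sketch below shows the idea with hypothetical drugs and dates. Only the 12-week window is taken from the study; a real comparison would also consider dose, form, and administration dates.

from datetime import date, timedelta

WINDOW = timedelta(weeks=12)
survey_date = date(2000, 6, 1)  # hypothetical date of the record review

# Hypothetical claims: (drug, dispensing date).
claims = [("frusemide", date(2000, 4, 3)),
          ("digoxin", date(2000, 1, 2)),
          ("warfarin", date(2000, 5, 20))]

# Hypothetical medications in the nursing home administration record.
administered = {"frusemide", "warfarin", "paracetamol"}

# Keep only claims dispensed within the 12 weeks before the survey date.
window_claims = {drug for drug, d in claims
                 if survey_date - WINDOW <= d <= survey_date}

matched = window_claims & administered
print(f"claims found in administration records: "
      f"{100 * len(matched) / len(window_claims):.0f}%")
print(f"administered medications found in claims: "
      f"{100 * len(matched) / len(administered):.0f}%")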

Relevance:

100.00%

Abstract:

For many years in the area of business systems analysis and design, practitioners and researchers alike have been searching for a comprehensive basis on which to evaluate, compare, and engineer the techniques promoted for modelling systems' requirements. To date, while many frameworks, factors, and facets have been put forward, none appears to be based on a sound theory. In light of this dilemma, over the last 10 years researchers have turned to ontology to provide a theoretical basis for advancing the business systems modelling discipline. This paper outlines how we have used a particular ontology for this purpose over the last five years. In particular, we have learned that the selected ontology must be clearly understandable and applicable for IS professionals, that the results of any ontological evaluation must be tempered by the economic efficiency considerations of the stakeholders involved, and that ontologies may have to be focused for the business purpose and the type of user involved in the modelling situation.

Relevance:

100.00%

Abstract:

Because organizations are making large investments in information systems (IS), efficient IS project management has been found to be critical to success. This study examines how the use of incentives can improve project success. Agency theory is used to identify the motivational factors of project success and to help IS owners understand the extent to which management incentives can improve IS development and implementation (ISD/I). The outcomes will help practitioners and researchers build a theoretical model of the project management elements that lead to project success. Given the principal-agent nature of most significant-scale IS development, insights that allow greater alignment of the agent's goals with those of the principal through incentive contracts will serve to make ISD/I both more efficient and more effective, leading to more successful IS projects.

Relevance:

100.00%

Abstract:

Large amounts of information can be overwhelming and costly to process, especially when transmitting data over a network. A typical modern geographical information system (GIS) brings all types of data together based on their geographic component and provides simple point-and-click query capabilities as well as complex analysis tools. Querying a GIS, however, can be prohibitively expensive because of the large amounts of data that may need to be processed. Since the use of GIS technology has grown dramatically in the past few years, there is now a greater need than ever to provide users with fast and inexpensive query capabilities, especially since an estimated 80% of the data stored in corporate databases has a geographical component. However, not every application requires the same high-quality data for its processing. In this paper we address the issues of reducing the cost and response time of GIS queries by pre-aggregating data, compromising on data accuracy and precision. We present computational issues in the generation of multi-level resolutions of spatial data, and we show that the problem of finding the best approximation for a given region and a real-valued function on that region, under a predictable error, is in general NP-complete.
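As a minimal sketch of the pre-aggregation idea, the Python below builds one coarser resolution of a raster by block averaging and answers an approximate regional query from it. The grid sizes, the 4x4 aggregation factor, and the query rectangle are illustrative assumptions; the paper's multi-level resolution structures are more general.

import numpy as np

rng = np.random.default_rng(0)
full = rng.random((512, 512))  # full-resolution attribute raster

FACTOR = 4

def coarsen(grid, factor):
    """Pre-aggregate by averaging non-overlapping factor x factor blocks."""
    h, w = grid.shape
    return grid.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

coarse = coarsen(full, FACTOR)  # 128 x 128 summary, built once, offline

def approx_region_mean(r0, r1, c0, c1):
    """Answer from the coarse level; region edges snap to block boundaries,
    which is exactly where accuracy and precision are traded for speed."""
    r0c, r1c = r0 // FACTOR, (r1 + FACTOR - 1) // FACTOR
    c0c, c1c = c0 // FACTOR, (c1 + FACTOR - 1) // FACTOR
    return coarse[r0c:r1c, c0c:c1c].mean()

exact = full[102:302, 50:450].mean()            # scans 80,000 cells
approx = approx_region_mean(102, 302, 50, 450)  # scans 5,151 cells
print(f"exact={exact:.4f}  approx={approx:.4f}  error={abs(exact - approx):.4f}")

The summary level is built once, and every query then touches roughly 1/16 as many cells; the snapped block edges are the accuracy cost that the paper's error analysis makes predictable.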