149 results for Hospital Information Systems


Relevance:

80.00%

Publisher:

Abstract:

This paper describes a practical application of MDA and reverse engineering based on a domain-specific modelling language. A well-defined metamodel of a domain-specific language is useful for verification and validation of associated tools. We apply this approach to SIFA, a security analysis tool. SIFA has evolved as requirements have changed, and it has no metamodel; hence, testing SIFA's correctness is difficult. We introduce a formal metamodelling approach to develop a well-defined metamodel of the domain. Initially, we develop a domain model in EMF by reverse engineering the SIFA implementation. We then transform the EMF model to Object-Z using model transformation. Finally, we complete the Object-Z model by specifying system behaviour. The outcome is a well-defined metamodel that precisely describes the domain and the security properties that it analyses. It also provides a reliable basis for testing the current SIFA implementation and for forward engineering its successor.
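
To give a flavour of what conformance to a metamodel means in practice, the following Python sketch defines a toy metamodel as classes and checks whether model elements conform to it. The class and attribute names are invented for illustration; they are not SIFA's actual metamodel or the paper's Object-Z specification.

```python
# Minimal illustration of conformance checking against a metamodel.
# The metaclasses and attribute names below are hypothetical, not SIFA's.
from dataclasses import dataclass, field

@dataclass
class MetaClass:
    name: str
    required_attrs: set[str] = field(default_factory=set)

@dataclass
class ModelElement:
    metaclass: str
    attrs: dict[str, object] = field(default_factory=dict)

def conforms(element: ModelElement, metamodel: dict[str, MetaClass]) -> bool:
    """An element conforms if its metaclass exists and all required attributes are present."""
    mc = metamodel.get(element.metaclass)
    return mc is not None and mc.required_attrs <= element.attrs.keys()

# A tiny "information-flow" metamodel in the spirit of a security analysis domain.
metamodel = {"Component": MetaClass("Component", {"id"}),
             "Flow": MetaClass("Flow", {"source", "target"})}
print(conforms(ModelElement("Flow", {"source": "A", "target": "B"}), metamodel))  # True
print(conforms(ModelElement("Flow", {"source": "A"}), metamodel))                 # False: target missing
```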

Relevance:

80.00%

Publisher:

Abstract:

An important feature of some conceptual modelling grammars is the constructs they provide to allow database designers to show that real-world things may or may not possess a particular attribute or relationship. In the entity-relationship model, for example, the fact that a thing may not possess an attribute can be represented by using a special symbol to indicate that the attribute is optional. Similarly, the fact that a thing may or may not be involved in a relationship can be represented by showing the minimum cardinality of the relationship as zero. Whether these practices should be followed, however, is a contentious issue. An alternative approach is to eliminate optional attributes and relationships from conceptual schema diagrams by using subtypes that have only mandatory attributes and relationships. In this paper, we first present a theory that led us to predict that optional attributes and relationships should be used in conceptual schema diagrams only when users of the diagrams require a surface-level understanding of the domain being represented. When users require a deep-level understanding, however, optional attributes and relationships should not be used, because they undermine users' ability to grasp important domain semantics. We then describe three experiments that we undertook to test our predictions. The results of the experiments support our predictions.
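
The contrast between the two modelling styles can be sketched in a few lines of Python. The entity and attribute names (Employee, parking_space) are invented for illustration and are not taken from the paper's experimental materials.

```python
# Two ways of modelling the same domain fact: "an employee may or may not have a parking space".
from dataclasses import dataclass
from typing import Optional

# (a) Optional attribute: optionality is encoded inside the entity itself.
@dataclass
class Employee:
    name: str
    parking_space: Optional[str] = None   # None stands for "does not apply"

# (b) Mandatory attributes on subtypes: optionality is lifted into the type hierarchy,
# so the rule "only some employees have a parking space" is explicit in the schema.
@dataclass
class EmployeeBase:
    name: str

@dataclass
class EmployeeWithParking(EmployeeBase):
    parking_space: str                    # always present for this subtype

@dataclass
class EmployeeWithoutParking(EmployeeBase):
    pass
```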

Relevance:

80.00%

Publisher:

Abstract:

With the proliferation of relational database programs for PCs and other platforms, many business end-users are creating, maintaining, and querying their own databases. More importantly, business end-users use the output of these queries as the basis for operational, tactical, and strategic decisions. Inaccurate data reduce the expected quality of these decisions. Implementing various input validation controls, including higher levels of normalisation, can reduce the number of data anomalies entering the databases. Even in well-maintained databases, however, data anomalies will still accumulate. To improve the quality of data, databases can be queried periodically to locate and correct anomalies. This paper reports the results of two experiments that investigated the effects of different data structures on business end-users' ability to detect data anomalies in a relational database. The results demonstrate that both unnormalised data structures and higher levels of normalisation lower the effectiveness and efficiency of queries relative to first normal form. First normal form databases appear to provide the most effective and efficient data structure for business end-users formulating queries to detect data anomalies.
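
As a concrete, if hypothetical, example of the kind of anomaly-detection query the paper has in mind, the snippet below uses Python's built-in sqlite3 module to find rows in a first-normal-form table where the same customer identifier appears under conflicting names (a violation of the functional dependency customer_id → customer_name). The table and data are invented; they are not the experimental tasks used in the paper.

```python
# Hypothetical anomaly-detection query on a first-normal-form table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, customer_name TEXT);
    INSERT INTO orders VALUES (1, 100, 'Smith'), (2, 100, 'Smyth'), (3, 200, 'Jones');
""")
anomalies = conn.execute("""
    SELECT customer_id, COUNT(DISTINCT customer_name) AS variants
    FROM orders
    GROUP BY customer_id
    HAVING COUNT(DISTINCT customer_name) > 1
""").fetchall()
print(anomalies)   # [(100, 2)] -> customer 100 is recorded under two different names
```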

Relevance:

80.00%

Publisher:

Abstract:

Objective: To assess consent to record linkage, describe the characteristics of consenters, and compare self-reported versus Medicare records of general practitioner use. Method: Almost 40,000 women in the Australian Longitudinal Study on Women's Health were sent a request by mail for permission to link their Medicare records and survey data. Results: 19,700 women consented: 37% of young (18-23 years), 59% of mid-age (45-50 years) and 53% of older women (70-75 years). Consenters tended to have higher levels of education and, among the older cohort, were in better health than non-consenters. Women tended to under-report the number of visits to general practitioners. Conclusions: Record linkage of survey and Medicare data on a large scale is feasible. The linked data provide information on health and socio-economic status that is valuable for understanding health service utilisation. Implications: Linked records provide a powerful tool for health care research, particularly in longitudinal studies.

Relevance:

80.00%

Publisher:

Abstract:

Medication data retrieved from Australian Repatriation Pharmaceutical Benefits Scheme (RPBS) claims for 44 veterans residing in nursing homes, and Pharmaceutical Benefits Scheme (PBS) claims for 898 nursing home residents, were compared with medication data from nursing home records to determine the optimal time interval for retrieving claims data and its validity. Optimal matching was achieved using 12 weeks of RPBS claims data, with 60% of medications in the RPBS claims located in nursing home administration records and 78% of medications administered to nursing home residents identified in RPBS claims. In comparison, 48% of medications administered to nursing home residents could be found in 12 weeks of PBS data, and 56% of medications present in PBS claims could be matched with nursing home administration records. RPBS claims data were superior to PBS data, owing to the larger number of scheduled items available to veterans and to the veteran's file number, which acts as a unique identifier. These findings should be taken into account when using prescription claims data for medication histories, prescriber feedback, drug utilisation, intervention, or epidemiological studies. (C) 2001 Elsevier Science Inc. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

Map algebra is a data model and simple functional notation for studying the distribution and patterns of spatial phenomena. It uses a uniform representation of space as discrete grids, which are organized into layers. This paper discusses extensions to map algebra to handle neighborhood operations with a new data type called a template. Templates provide general windowing operations on grids to enable spatial models for cellular automata, mathematical morphology, and local spatial statistics. A programming language for map algebra, called MapScript, that incorporates templates and special processing constructs is described. Example program scripts are presented that perform diverse neighborhood analyses for descriptive, model-based, and process-based analysis.
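
To illustrate the idea of a template (windowing) operation on a grid layer, here is a minimal Python/NumPy sketch of a focal mean over a 3x3 neighbourhood. This is only an illustration of the concept; it is not MapScript syntax, and the grid values are invented.

```python
# Illustrative neighbourhood ("template") operation on a grid layer: a 3x3 focal mean.
import numpy as np

def focal_mean(grid: np.ndarray, size: int = 3) -> np.ndarray:
    """Mean of each cell's size x size neighbourhood; edge cells are padded by replication."""
    pad = size // 2
    padded = np.pad(grid, pad, mode="edge")
    out = np.empty_like(grid, dtype=float)
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

elevation = np.array([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]], dtype=float)
print(focal_mean(elevation))   # smoothed surface: each cell replaced by its neighbourhood mean
```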

Relevance:

80.00%

Publisher:

Abstract:

Geographical information systems (GIS) coupled to 3D visualisation technology are an emerging tool for urban planning and landscape design applications. The utility of 3D GIS for realistically visualising the built environment and proposed development scenarios is much advocated in the literature. Planners assess the merits of proposed changes using visual impact assessment (VIA). We have used ArcView GIS and the visualisation software PolyTRIM, from the University of Toronto Centre for Landscape Research (CLR), to create a 3D scene of the entrance to a university campus. The paper investigates the thesis that facilitating VIA in planning and design requires not only visualisation but also a structured evaluation technique (Delphi) to arbitrate the decision-making process. (C) 2001 Elsevier Science B.V. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

Land-related information about the Earth's surface is commonly found in two forms: (1) map information and (2) satellite image data. Satellite imagery provides a good visual picture of what is on the ground, but complex image processing is required to interpret features in an image scene. Increasingly, methods are being sought to integrate the knowledge embodied in map information into the interpretation task, or, alternatively, to bypass interpretation and perform biophysical modeling directly on derived data sources. A cartographic modeling language, as a generic map analysis package, is suggested as a means to integrate geographical knowledge and imagery in a process-oriented view of the Earth. Specialized cartographic models may be developed by users, which incorporate mapping information in performing land classification. In addition, a cartographic modeling language may be enhanced with operators suited to processing remotely sensed imagery. We demonstrate the usefulness of a cartographic modeling language for pre-processing satellite imagery, and define two new cartographic operators that evaluate image neighborhoods as post-processing operations to interpret thematic map values. The language and operators are demonstrated with an example image classification task.
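
One plausible example of a neighbourhood post-processing operator for a classified image is a 3x3 majority (modal) filter, which reassigns isolated class values to the most common class among their neighbours. The sketch below is an assumption about the general kind of operator meant; the paper's two operators and their definitions may differ.

```python
# Hedged sketch of a neighbourhood post-processing operator on a classified raster:
# a 3x3 majority filter that smooths isolated, likely-noisy class values.
import numpy as np
from collections import Counter

def majority_filter(classes: np.ndarray) -> np.ndarray:
    padded = np.pad(classes, 1, mode="edge")
    out = classes.copy()
    rows, cols = classes.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + 3, j:j + 3].ravel().tolist()
            out[i, j] = Counter(window).most_common(1)[0][0]
    return out

classified = np.array([[1, 1, 1],
                       [1, 2, 1],    # the isolated "2" is likely noise
                       [1, 1, 1]])
print(majority_filter(classified))   # the centre cell is reassigned to class 1
```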

Relevance:

80.00%

Publisher:

Abstract:

It has been argued that, beyond software engineering and process engineering, ontological engineering is the third capability needed if successful e-commerce is to be realized. In our experience of building an ontology-based tendering system, we faced the problem of building an ontology. In this paper, we demonstrate how to build ontologies in the tendering domain. The ontology life cycle is identified. Extracting concepts from existing resources, such as on-line catalogs, is described. We have reused electronic data interchange (EDI) to build conceptual structures in the tendering domain. An algorithm to extract abstract ontological concepts from these structures is proposed.
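
As a rough illustration of extracting concept structure from an existing resource such as an on-line catalog, the sketch below turns slash-separated category paths into parent/child relations. The categories and the path format are invented assumptions; the paper's actual EDI-based extraction algorithm is not reproduced here.

```python
# Hypothetical concept extraction from catalog category paths (illustrative data only).
from collections import defaultdict

catalog_paths = [
    "Construction/Materials/Cement",
    "Construction/Materials/Steel",
    "Construction/Services/Electrical",
]

def build_taxonomy(paths):
    """Turn slash-separated category paths into parent -> children relations."""
    children = defaultdict(set)
    for path in paths:
        parts = path.split("/")
        for parent, child in zip(parts, parts[1:]):
            children[parent].add(child)
    return children

for concept, subconcepts in build_taxonomy(catalog_paths).items():
    print(concept, "->", sorted(subconcepts))
# Construction -> ['Materials', 'Services']
# Materials -> ['Cement', 'Steel']
# Services -> ['Electrical']
```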

Relevance:

80.00%

Publisher:

Abstract:

Eukaryotic phenotypic diversity arises from multitasking of a core proteome of limited size. Multitasking is routine in computers, as well as in other sophisticated information systems, and requires multiple inputs and outputs to control and integrate network activity. Higher eukaryotes have a mosaic gene structure with a dual output, mRNA (protein-coding) sequences and introns, which are released from the pre-mRNA by posttranscriptional processing. Introns have been enormously successful as a class of sequences and comprise up to 95% of the primary transcripts of protein-coding genes in mammals. In addition, many other transcripts (perhaps more than half) do not encode proteins at all, but appear both to be developmentally regulated and to have genetic function. We suggest that these RNAs (eRNAs) have evolved to function as endogenous network control molecules which enable direct gene-gene communication and multitasking of eukaryotic genomes. Analysis of a range of complex genetic phenomena in which RNA is involved or implicated, including co-suppression, transgene silencing, RNA interference, imprinting, methylation, and transvection, suggests that a higher-order regulatory system based on RNA signals operates in the higher eukaryotes and involves chromatin remodeling as well as other RNA-DNA, RNA-RNA, and RNA-protein interactions. The evolution of densely connected gene networks would be expected to result in a relatively stable core proteome due to the multiple reuse of components, implying that cellular differentiation and phenotypic variation in the higher eukaryotes result primarily from variation in the control architecture. Thus, network integration and multitasking using trans-acting RNA molecules produced in parallel with protein-coding sequences may underpin both the evolution of developmentally sophisticated multicellular organisms and the rapid expansion of phenotypic complexity into uncontested environments such as those initiated in the Cambrian radiation and those seen after major extinction events.