988 results for database design
Abstract:
In designing new products, the ability to retrieve drawings of existing components is important if costs are to be controlled by preventing unnecessary duplication of parts. Component coding and classification systems have been used successfully for these purposes, but suffer from high operational costs and poor usability arising directly from the manual nature of the coding process itself. A new version of an existing coding system (CAMAC) has been developed to reduce costs by automatically coding engineering drawings. Usability is improved by supporting searches based on a drawing or sketch of the desired component. Test results from a database of several thousand drawings are presented.
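The abstract does not describe the CAMAC code format itself; as a hedged illustration of code-based component retrieval, the sketch below uses invented fixed-length code vectors and matches a query code against a database by counting differing code positions:

```python
# Hypothetical sketch: retrieval of similarly coded components.
# The CAMAC coding scheme is not described in the abstract; here each
# drawing is reduced to an invented fixed-length code vector, and
# queries are matched by counting differing positions.

def code_distance(a, b):
    """Number of code positions in which two component codes differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

def find_similar(query_code, database, max_distance=1):
    """Return names of drawings whose codes differ from the query
    in at most max_distance positions."""
    return [name for name, code in database.items()
            if code_distance(query_code, code) <= max_distance]

# Invented example database of coded drawings.
drawings = {
    "bracket_A": (3, 1, 4, 0),
    "bracket_B": (3, 1, 4, 1),
    "shaft_C":   (7, 2, 0, 5),
}
```

A query for code `(3, 1, 4, 0)` would then return both brackets but not the shaft, the kind of near-match retrieval that prevents redesigning an existing part.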
Abstract:
Chorismate mutase is one of the essential enzymes in the shikimate pathway and is key to the survival of the organism Mycobacterium tuberculosis. The X-ray crystal structure of this enzyme from Mycobacterium tuberculosis was manipulated to prepare an initial set of in silico protein models of the active site. Known inhibitors of the enzyme were docked into the active site using the flexible ligand / flexible active-site side chains approach implemented in CAChe Worksystem (Fujitsu Ltd). The resulting complexes were refined by molecular dynamics studies in explicit water using Amber 9. This yielded a further set of protein models that were used for additional rounds of ligand docking. A binding hypothesis was established for the enzyme and used to screen a database of commercially available drug-like compounds. From these results, new potential ligands were designed that fitted appropriately into the active site and matched the functional groups and binding motifs found therein. Some of these compounds and close analogues were then synthesized and submitted for biological evaluation. As a separate part of this thesis, analogues of a very active anti-tuberculosis pyridylcarboxamidrazone were also prepared, by adding and deleting substituents on the lead compound to give heteroaryl carboxamidrazone derivatives and related compounds. All these compounds were initially evaluated for biological activity against various Gram-positive organisms and then sent to the TAACF (USA) for screening against Mycobacterium tuberculosis. Some of the new compounds proved to be at least as potent as the original lead compound but less toxic.
Abstract:
Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness.
The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of application are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
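As a hedged sketch of the central idea, and not the thesis's actual protocol, the fragment below shows a commit decision rule that enforces both the all-or-nothing property and a deadline: any late or negative vote forces a global abort, so atomicity is preserved within the timing constraint.

```python
# Illustrative sketch (not the thesis protocol): the coordinator's
# decision rule for an atomic commit with a deadline.  Each
# participant's vote arrives at some time; a late vote or any "no"
# forces a global abort.

def commit_with_deadline(votes, deadline):
    """votes: list of (vote, arrival_time) pairs collected by the
    coordinator.  Returns the global decision."""
    for vote, arrived_at in votes:
        if arrived_at > deadline or vote != "yes":
            return "abort"
    return "commit"
```

With every "yes" vote arriving before the deadline the decision is "commit"; a single late or negative vote is enough to abort, which is what makes the outcome predictable in time.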
Abstract:
Computer-Based Learning systems of one sort or another have been in existence for almost 20 years, but they have yet to achieve real credibility within Commerce, Industry or Education. A variety of reasons could be postulated for this, typically cost, complexity, inefficiency, inflexibility and tedium. Obviously different systems deserve different levels and types of criticism, but it still remains true that Computer-Based Learning (CBL) is falling significantly short of its potential. Experience of a small but highly successful CBL system within a large, geographically distributed industry (the National Coal Board) prompted an investigation into currently available packages, the original intention being to purchase the most suitable software and run it on existing computer hardware, alongside existing software systems. It became apparent that none of the available CBL packages were suitable, and a decision was taken to develop an in-house Computer-Assisted Instruction system according to the following criteria: cheap to run; easy to author course material; easy to use; requiring no computing knowledge to use (as either an author or student); efficient in the use of computer resources; and offering a comprehensive range of facilities at all levels. This thesis describes the initial investigation, resultant observations and the design, development and implementation of the SCHOOL system. One of the principal characteristics of SCHOOL is that it uses a hierarchical database structure for the storage of course material, thereby inherently providing a great deal of the power, flexibility and efficiency originally required. Trials using the SCHOOL system on IBM 303X series equipment are also detailed, along with proposed and current development work on what is essentially an operational CBL system within a large-scale Industrial environment.
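The hierarchical storage idea can be illustrated with a minimal sketch; the record layout and the course/lesson/frame names below are invented for illustration, not SCHOOL's actual structure.

```python
# Hedged sketch of hierarchical course-material storage; names and
# layout are hypothetical, not SCHOOL's record format.

class Node:
    """One node in the course-material hierarchy."""
    def __init__(self, name, content=None):
        self.name = name
        self.content = content     # leaf material, if any
        self.children = {}         # child name -> Node

    def add(self, path, content):
        """Store material at a path, creating intermediate nodes."""
        node = self
        for part in path[:-1]:
            node = node.children.setdefault(part, Node(part))
        node.children[path[-1]] = Node(path[-1], content)

    def get(self, path):
        """Retrieve material by walking the hierarchy."""
        node = self
        for part in path:
            node = node.children[part]
        return node.content

course_db = Node("root")
course_db.add(["geology", "lesson1", "frame1"],
              "Coal seams form from compressed peat.")
```

Because every lookup walks only one path from the root, storage and retrieval of a frame touch only its ancestors, which is the kind of efficiency a hierarchical layout buys over a flat file of course material.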
Abstract:
This thesis presents a new approach to designing large organizational databases, emphasizing the need to treat the design process holistically. The development of the proposed approach was based on a comprehensive examination of the issues relevant to the design and utilization of databases, including conceptual modelling, organization theory, and semantic theory. The conceptual modelling approach presented in this thesis is developed over three design stages, or model perspectives. In the semantic perspective, concept definitions were developed based on established semantic principles. Such definitions rely on meaning - provided by intension and extension - to determine intrinsic conceptual definitions. A tool, called meaning-based classification (MBC), is devised to classify concepts based on meaning. Concept classes are then integrated using concept definitions and a set of semantic relations which rely on concept content and form. In the application perspective, relationships are semantically defined according to the application environment. Relationship definitions include explicit relationship properties and constraints. The organization perspective introduces a new set of relations specifically developed to maintain conformity of conceptual abstractions with the nature of information abstractions implied by user requirements throughout the organization. Such relations are based on the stratification of work hierarchies, defined elsewhere in the thesis. Finally, an example of an application of the proposed approach is presented to illustrate the applicability and practicality of the modelling approach.
Abstract:
Objective - To evaluate behavioural components and strategies associated with increased uptake and effectiveness of screening for coronary heart disease and diabetes, with an implementation science focus. Design - Realist review. Data sources - PubMed, Web of Knowledge, Cochrane Database of Systematic Reviews, Cochrane Controlled Trials Register and reference chaining. Searches were limited to English-language studies published since 1990. Eligibility criteria - Eligible studies evaluated interventions designed to increase the uptake of cardiovascular disease (CVD) and diabetes screening and examined behavioural and/or strategic designs. Studies were excluded if they evaluated changes in risk factors or cost-effectiveness only. Results - In 12 eligible studies, several different intervention designs and evidence-based strategies were evaluated. Salient themes were the effects of feedback on behaviour change and the benefits of health dialogues over simple feedback. Studies provide mixed evidence about the benefits of these intervention constituents, which are suggested to be situation and design specific, broadly supporting their use but highlighting concerns about the fidelity of intervention delivery, raising implementation science issues. Three studies examined the effects of informed choice or loss-versus-gain-framed invitations, finding no effect on screening uptake but highlighting opportunistic screening as more successful for recruiting higher CVD and diabetes risk patients than an invitation letter, with no differences in outcomes once recruited. Two studies examined differences between attenders and non-attenders, finding higher risk factors among non-attenders and higher diagnosed CVD and diabetes among those who later dropped out of longitudinal studies.
Conclusions - If the risk and prevalence of these diseases are to be reduced, interventions must take into account what we know about effective health behaviour change mechanisms, monitor delivery by trained professionals and examine the possibility of tailoring programmes according to contexts such as risk level to reach those most in need. Further research is needed to determine the best strategies for lifelong approaches to screening.
Abstract:
Our paper presents the work of the Cuneiform Digital Forensic Project (CDFP), an interdisciplinary project at The University of Birmingham concerned with the development of a multimedia database to support scholarly research into cuneiform, the wedge-shaped writing imprinted onto clay tablets that is the earliest true form of writing. We describe the evolutionary design process and the dynamic research and development cycles associated with the database. Unlike traditional publications, the electronic publication of resources offers the possibility of almost continuous revision, with the integration and support of new media and interfaces. However, if on-line resources are to win the favor and confidence of their respective communities, there must be a clear distinction between published, maintained resources and developmental content. Published material should, ideally, be supported via standard web-browser interfaces with fully integrated tools, so that users receive a reliable, homogeneous and intuitive flow of information and media relevant to their needs. We discuss the inherent dynamics of the design and publication of our on-line resource, starting with the basic design and maintenance aspects of the electronic database, which includes photographic instances of cuneiform signs, and show how the continuous review process identifies areas for further research and development, for example the “sign processor” graphical search tool and three-dimensional content, the results of which then feed back into the maintained resource.
Abstract:
The Teallach project has adapted model-based user-interface development techniques to the systematic creation of user-interfaces for object-oriented database applications. Model-based approaches aim to provide designers with a more principled approach to user-interface development using a variety of underlying models, and tools which manipulate these models. Here we present the results of the Teallach project, describing the tools developed and the flexible design method supported. Distinctive features of the Teallach system include provision of database-specific constructs, comprehensive facilities for relating the different models, and support for a flexible design method in which models can be constructed and related by designers in different orders and in different ways, to suit their particular design rationales. The system then creates the desired user-interface as an independent, fully functional Java application, with automatically generated help facilities.
Abstract:
Because some Web users are able to design a template to visualize information from scratch, while others need information visualized automatically by changing a few parameters, providing different levels of customization of the information is a desirable goal. Our system allows both the automatic generation of visualizations given the semantics of the data, and static or pre-specified visualization through an interface language we have created. We address information visualization with the Web in mind, where the presentation of retrieved information is a challenge. We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports. As opposed to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to easily connect the database to the Web. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display database objects in a flat view, making it difficult for users to grasp the contents and structure of their result. Our model narrows the gap between databases and the Web. The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application. This increases the speed of development.
In addition, using only a Web browser, end-users can retrieve data from databases remotely, making necessary modifications and manipulations of data through Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
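As a hedged illustration of validating generated SQL against a schema before execution, the sketch below uses an invented schema, table, and helper function; it is not the paper's actual interface language.

```python
# Hedged sketch: schema-checked SQL generation.  The schema and names
# are hypothetical, not the system described in the abstract.

SCHEMA = {"orders": {"id", "customer", "total"}}

def build_query(table, columns, where=None):
    """Emit a SELECT only when the table and all requested columns
    exist in the schema, so the generated SQL is valid against it
    by construction."""
    if table not in SCHEMA or not set(columns) <= SCHEMA[table]:
        raise ValueError("query does not match the database schema")
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql
```

A request for an unknown column is rejected before it ever reaches the database, which is the sense in which checking against the schema keeps execution "harmless".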
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry providing ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) providing a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas.
This methodology is shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, acting as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
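The basic idea behind a global view over component schemas can be sketched very loosely: a mapping table translates global attribute names into each source's local names, so one global query can be rewritten per source. The mappings below are invented for illustration and are far simpler than Sem-ODM.

```python
# Loose, hedged illustration of global-view attribute mapping over
# component schemas.  The source names and mappings are invented.

MAPPINGS = {
    "source_a": {"employee": "emp", "salary": "sal"},
    "source_b": {"employee": "person", "salary": "pay"},
}

def rewrite(global_attrs, source):
    """Rewrite global attribute names into a source's local names,
    so the same global query can be posed to every component schema."""
    local = MAPPINGS[source]
    return [local[attr] for attr in global_attrs]
```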
Abstract:
An electronic database support system for strategic planning activities can be built by providing conceptual and system specific information. The design and development of this type of system center around the information needs of strategy planners. Data that supply information on the organization's internal and external environments must be originated, evaluated, collected, organized, managed, and analyzed. Strategy planners may use the resulting information to improve their decision making.
Abstract:
Current methods of understanding microbiome composition and structure rely on accurately estimating the number of distinct species and their relative abundance. Most of these methods require an efficient PCR whose forward and reverse primers bind well to the same large number of identifiable species and produce amplicons that are unique. It is therefore not surprising that the currently used universal primers, designed many years ago, are less efficient and fail to bind to recently cataloged species. We propose an automated, general method of designing PCR primer pairs that abides by primer design rules and uses a current sequence database as input. Since the method is automated, primers can be designed for targeted microbial species or updated as species are added to or deleted from the database. In silico experiments and laboratory experiments confirm the efficacy of the newly designed primers for metagenomics applications.
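A minimal sketch of rule-based primer screening is shown below; the length, GC-content, and melting-temperature thresholds are common textbook heuristics (including the Wallace rule for Tm), not the specific criteria of the proposed method.

```python
# Hedged sketch of rule-based primer screening.  Thresholds are
# textbook heuristics, not the paper's actual design rules.

def gc_content(primer):
    """Fraction of G/C bases in the primer sequence."""
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer):
    """Wallace-rule melting temperature: 2 deg C per A/T, 4 per G/C."""
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

def passes_rules(primer, gc_range=(0.4, 0.6), tm_range=(50, 65)):
    """Accept a candidate primer only if it satisfies every rule."""
    return (18 <= len(primer) <= 25
            and gc_range[0] <= gc_content(primer) <= gc_range[1]
            and tm_range[0] <= wallace_tm(primer) <= tm_range[1])
```

An automated designer would apply checks like these to every candidate pair drawn from the current sequence database, which is what makes re-running the design cheap as the database grows.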
Abstract:
The primary purpose of this thesis was to design and develop a prototype e-commerce system in which dynamic parameters are included in the decision-making process and execution of an online transaction. The system developed and implemented takes into account previous usage history, priority and associated engineering capabilities. The system was developed using a three-tiered client-server architecture, with an Internet browser as the interface. The middle-tier Web server was implemented using Active Server Pages, which form a link between the client system and other servers. A relational database management system formed the data component of the three-tiered architecture. It includes a data warehousing capability that extracts needed information from the stored data of the customers as well as their orders. The system organizes and analyzes the data generated during a transaction to formulate a model of a client's behavior during and after a transaction. This is used for making decisions such as pricing and order rescheduling during a client's forthcoming transactions. The system helps, among other things, to bring predictability to the transaction execution process, which could be highly desirable in the current competitive scenario.
Abstract:
The purpose of this thesis was to develop an efficient routing protocol that provides mobility support to mobile devices roaming within a network. The routing protocol needs to be compatible with the existing Internet architecture. The protocol proposed here is based on the Mobile IP routing protocol and could solve some of the problems in current Mobile IP implementations, e.g. the ingress filtering problem. By implementing an efficient timeout mechanism and introducing a paging mechanism to the wireless network, the protocol minimizes the number of control messages sent over the network. The implementation of the system primarily involves three components: 1) mobile devices that need to gain access to the network, 2) a router providing roaming support to the mobile devices, and 3) a database server providing basic authentication services on the system. As a result, an efficient IP routing protocol is developed which provides seamless mobility to mobile devices on the network.
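The timeout idea can be sketched as follows (names and numbers are invented, and this is not the thesis's actual mechanism): the router caches each device's registration with a lifetime and only prompts re-registration when the binding nears expiry, rather than exchanging periodic control messages.

```python
# Hedged sketch of a registration cache with a timeout; hypothetical
# names and thresholds, not the proposed protocol itself.

class BindingCache:
    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.expiry = {}           # device -> time the binding lapses

    def register(self, device, now):
        """(Re)register a device, extending its binding lifetime."""
        self.expiry[device] = now + self.lifetime

    def needs_refresh(self, device, now, margin=1.0):
        """True only when the binding expires within `margin` seconds,
        so no control traffic is needed while it remains valid."""
        return self.expiry[device] - now <= margin

cache = BindingCache(lifetime=10.0)
cache.register("device_1", now=0.0)
```

While the binding is comfortably within its lifetime, no messages are exchanged at all, which is how a timeout mechanism cuts control traffic compared with constant polling.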