873 results for Distributed database systems
Abstract:
The simulation technique has gained much importance in performance studies of Concurrency Control (CC) algorithms for distributed database systems. However, details regarding the simulation methodology and implementation are seldom mentioned in the literature. One objective of this paper is to elaborate the simulation methodology using SIMULA. Detailed studies have been carried out on a centralised CC algorithm and its modified version. The results compare well with a previously reported study of these algorithms. Here, additional results concerning the update intensiveness of transactions and the degree of conflict are obtained. The degree of conflict is quantitatively measured and is seen to be a useful performance index. Regression analysis has been carried out on the results, and an optimisation study using the regression model has been performed to minimise the response time. Such a study may prove useful for the design of distributed database systems.
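As a hedged illustration of how a degree-of-conflict index might be measured inside such a simulation, the Python sketch below (the paper itself used SIMULA) counts blocked lock requests as a fraction of all requests. This particular definition, and all the workload numbers, are assumptions made for illustration, not the paper's formulation.

```python
# A minimal sketch of measuring a degree-of-conflict index in a CC
# simulation. The definition used here -- blocked lock requests as a
# fraction of all requests -- is an assumption for illustration.
import random

def simulate(num_txns=1000, db_size=100, txn_size=8, seed=42):
    rng = random.Random(seed)
    locked = set()          # items currently locked by active transactions
    requests = conflicts = 0
    for _ in range(num_txns):
        items = rng.sample(range(db_size), txn_size)
        for item in items:
            requests += 1
            if item in locked:
                conflicts += 1   # request blocks: a conflict
            else:
                locked.add(item)
        # model transaction completion by releasing a random subset of locks
        for item in list(locked):
            if rng.random() < 0.5:
                locked.discard(item)
    return conflicts / requests  # degree of conflict

print(f"degree of conflict: {simulate():.3f}")
```

Raising `txn_size` or shrinking `db_size` drives the index up, which is the sense in which it can serve as a tunable performance index in a regression study.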
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, a single, generally accepted methodology providing ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources has not emerged in academia or industry. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty in resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) providing a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process; this knowledge base acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
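As a loose sketch of the flavour of contribution (iii), the hypothetical Python fragment below flags candidate semantic relations by mapping construct names from two component schemas onto a shared ontology. The ontology, the schema names and the matching rule are all invented for illustration and are far simpler than the thesis's Sem-ODM machinery.

```python
# A hedged illustration of ontology-mediated identification of candidate
# semantic relations. Every name below is hypothetical.
SHARED_ONTOLOGY = {            # construct name -> ontology concept
    "emp_name": "PersonName",
    "employee": "PersonName",
    "salary":   "MonetaryAmount",
    "wage":     "MonetaryAmount",
    "dept":     "OrgUnit",
}

def candidate_relations(schema_a, schema_b):
    """Pair constructs from two component schemas that map to the same concept."""
    pairs = []
    for a in schema_a:
        for b in schema_b:
            ca = SHARED_ONTOLOGY.get(a)
            cb = SHARED_ONTOLOGY.get(b)
            if ca is not None and ca == cb:
                pairs.append((a, b, ca))
    return pairs

print(candidate_relations(["emp_name", "salary"], ["employee", "wage", "dept"]))
# [('emp_name', 'employee', 'PersonName'), ('salary', 'wage', 'MonetaryAmount')]
```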
Abstract:
Mobile applications are increasingly being deployed on a massive scale in various mobile sensor grid database systems. With the limited resources of mobile devices, processing the huge number of queries from mobile users against distributed sensor grid databases becomes a critical problem for such mobile systems. While the fundamental semantic cache technique has been investigated for query optimization in sensor grid database systems, the problem remains difficult because more realistic multi-dimensional constraints have not been considered in existing methods. To solve the problem, a new semantic cache scheme is presented in this paper for location-dependent data queries in distributed sensor grid database systems. It considers multi-dimensional constraints or factors in a unified cost model architecture, determines the parameters of the cost model using the concept of Nash equilibrium from game theory, and makes semantic cache decisions from the established cost model. Scenarios involving three factors, semantics, time and location, are investigated as special cases that improve on existing methods. Experiments are conducted to demonstrate the semantic cache scheme presented in this paper for distributed sensor grid database systems.
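A minimal sketch of a multi-factor cache cost model in the spirit of the paper follows. The fixed weights stand in for the parameters the paper derives via Nash equilibrium; holding them constant is an assumption made purely for illustration, as are the field names and threshold.

```python
# A hedged sketch of a semantic-cache decision driven by a unified cost
# model over semantic, time and location factors. Weights are placeholders
# for the Nash-equilibrium-derived parameters described in the abstract.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    semantic_distance: float  # mismatch between cached and requested predicate
    age_s: float              # seconds since the entry was fetched
    location_km: float        # distance between cached and query locations

W_SEM, W_TIME, W_LOC = 0.5, 0.3, 0.2   # placeholder weights

def cost(e: CacheEntry) -> float:
    return W_SEM * e.semantic_distance + W_TIME * e.age_s / 60 + W_LOC * e.location_km

def answer_from_cache(e: CacheEntry, threshold: float = 1.0) -> bool:
    """Serve from cache when the combined cost is below a threshold."""
    return cost(e) < threshold

print(answer_from_cache(CacheEntry(0.4, 30.0, 1.5)))  # True: cheap enough to reuse
```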
Abstract:
Many real-time database applications arise in electronic financial services, safety-critical installations and military systems where enforcing security is crucial to the success of the enterprise. For real-time database systems supporting applications with firm deadlines, we investigate here the performance implications, in terms of killed transactions, of guaranteeing multilevel secrecy. In particular, we focus on the concurrency control (CC) aspects of this issue. Our main contributions are the following: First, we identify which among the previously proposed real-time CC protocols are capable of providing covert-channel-free security. Second, using a detailed simulation model, we profile the real-time performance of a representative set of these secure CC protocols for a variety of security-classified workloads and system configurations. Our experiments show that a prioritized optimistic CC protocol, OPT-WAIT, provides the best overall performance. Third, we propose and evaluate a novel "dual-CC" approach that allows the real-time database system to simultaneously use different CC mechanisms for guaranteeing security and for improving real-time performance. By appropriately choosing these different mechanisms, concurrency control protocols that provide even better performance than OPT-WAIT are designed. Finally, we propose and evaluate GUARD, an adaptive admission-control policy designed to provide fairness with respect to the distribution of killed transactions across security levels. Our experiments show that GUARD efficiently provides close to ideal fairness for real-time applications that can tolerate covert channel bandwidths of up to one bit per second.
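The sketch below renders the priority-wait rule commonly attributed to OPT-WAIT in the real-time CC literature: a validating transaction waits whenever a conflicting transaction has higher priority (earlier deadline) and restarts lower-priority conflicters otherwise. The data structures and the deadline-as-priority mapping are simplified stand-ins, not the paper's simulation model.

```python
# A schematic sketch of an OPT-WAIT-style validation decision for firm
# deadlines. Structures are illustrative stand-ins.
def validate(txn, conflicting, now):
    """Return 'commit', 'wait', or None (deadline missed -> transaction killed)."""
    if now > txn["deadline"]:
        return None                       # firm deadline missed: kill txn
    if any(c["deadline"] < txn["deadline"] for c in conflicting):
        return "wait"                     # defer to higher-priority conflicters
    for c in conflicting:
        c["restarted"] = True             # restart lower-priority conflicters
    return "commit"

t = {"id": 1, "deadline": 100}
others = [{"id": 2, "deadline": 150, "restarted": False}]
print(validate(t, others, now=80))        # 'commit'; txn 2 is restarted
```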
Abstract:
Semiotics is the study of signs. The application of semiotics to information systems design is based on the notion that information systems are organizations within which agents deploy signs in the form of actions according to a set of norms. An analysis of the relationships among the agents, their actions and the norms would give a better specification of the system. A distributed multimedia system (DMMS) can be viewed as a system consisting of many dynamic, self-controlled normative agents engaging in complex interaction and processing of multimedia information. This paper reports the work of applying the semiotic approach to the design and modeling of DMMS, with emphasis on using semantic analysis under the semiotic framework. A semantic model of DMMS describing various components and their ontological dependencies is presented, which then serves as a design model and is implemented in a semantic database. Benefits of using the semantic database are discussed with reference to various design scenarios.
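As a toy illustration of what an ontological dependency means operationally, the sketch below removes an entity and, transitively, everything whose existence depends on it, which is the kind of constraint a semantic database enforces. The component names are invented for illustration, not taken from the paper's model.

```python
# A toy sketch of ontological dependencies among DMMS components: an entity
# ceases to exist when the entity it depends on does. Names are illustrative.
DEPENDS_ON = {                 # child -> parent it ontologically depends on
    "video_stream": "session",
    "audio_stream": "session",
    "session":      "user_agent",
}

def remove(entity, existing):
    """Remove an entity and, transitively, everything that depends on it."""
    existing.discard(entity)
    for child, parent in DEPENDS_ON.items():
        if parent == entity and child in existing:
            remove(child, existing)

world = {"user_agent", "session", "video_stream", "audio_stream"}
remove("session", world)
print(world)  # {'user_agent'}: both streams vanish with their session
```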
Abstract:
Usually, a Petri net is applied as an RFID modelling tool. This paper, however, presents another approach to the Petri net concerning RFID systems. This approach, called the elementary Petri net inside an RFID distributed database, or PNRD, is the first step to improving the integration of RFID and control systems, based on a formal data structure to identify and update the product state in real-time process execution, allowing automatic discovery of unexpected events during tag data capture. There are two main features in this approach: using RFID tags as the database of the expected object process and of the last product state; and applying Petri net analysis to automatically update the last-product-state registry during reader data capture. RFID reader data capture can be viewed, in Petri nets, as a direct analysis of locality for a specific transition that holds in a specific workflow. Following this direction, RFID readers are expected to store a list of Petri net control vectors related to each tag id. This paper presents the PNRD cornerstones and a PNRD implementation example in software called DEMIS (Distributed Environment in Manufacturing Information Systems).
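A condensed sketch of the PNRD idea follows: the tag carries the product's current Petri-net marking, each reader corresponds to a transition, and a read fires that transition, flagging an unexpected event when it is not enabled. The two-step workflow net below is made up for illustration and is not the paper's DEMIS example.

```python
# A hedged sketch of PNRD-style state tracking: firing a reader's transition
# against the marking stored on the tag. The net itself is illustrative.
PRE  = {"t_assemble": {"raw": 1},       "t_pack": {"assembled": 1}}
POST = {"t_assemble": {"assembled": 1}, "t_pack": {"packed": 1}}

def read_tag(marking, reader_transition):
    """Fire the reader's transition on the tag's marking, in place."""
    pre = PRE[reader_transition]
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        raise ValueError(f"unexpected event: {reader_transition} not enabled")
    for p, n in pre.items():
        marking[p] -= n
    for p, n in POST[reader_transition].items():
        marking[p] = marking.get(p, 0) + n

tag = {"raw": 1}                 # state stored on the RFID tag
read_tag(tag, "t_assemble")
print(tag)                       # {'raw': 0, 'assembled': 1}
read_tag(tag, "t_pack")
print(tag)                       # {'raw': 0, 'assembled': 0, 'packed': 1}
```

Reading the tag at an out-of-sequence station raises the "unexpected event" error, which is exactly the automatic discovery the abstract describes.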
Abstract:
Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of applications are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
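The stripped-down sketch below shows the shape of a deadline-bounded atomic commit decision in the spirit of the modified protocols the thesis proposes: abort is the safe default whenever the deadline expires before every vote arrives. The vote-collection interface and timings are invented for illustration; the thesis itself specifies its protocols with Timed Petri nets and synchronous messaging.

```python
# A hedged sketch of a two-phase-commit decision bounded by a deadline.
# The collect_votes callable is a hypothetical stand-in for the messaging layer.
import time

def commit_with_deadline(collect_votes, deadline_s):
    """Commit only if every participant votes yes before the deadline."""
    start = time.monotonic()
    for vote in collect_votes():            # one vote per participant
        if time.monotonic() - start > deadline_s:
            return "abort"                  # deadline reached: safe default
        if vote != "yes":
            return "abort"
    return "commit"

# Example: three participants, all voting in time.
print(commit_with_deadline(lambda: iter(["yes", "yes", "yes"]), deadline_s=0.05))
```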
Abstract:
A distributed fuzzy system is a real-time fuzzy system in which the input, output and computation may be located on different networked computing nodes. The ability of a distributed software application, such as a distributed fuzzy system, to adapt to changes in the computing network at runtime can provide real-time performance improvement and fault-tolerance. This paper introduces an Adaptable Mobile Component Framework (AMCF) that provides a distributed dataflow-based platform with a fine-grained level of runtime reconfigurability. The execution location of small fragments (possibly as little as a few machine-code instructions) of an AMCF application can be moved between different computing nodes at runtime. A case study is included that demonstrates the applicability of the AMCF to a distributed fuzzy system scenario involving multiple physical agents (such as autonomous robots). Using the AMCF, fuzzy systems can now be developed such that they can be distributed automatically across multiple computing nodes and are adaptable to runtime changes in the networked computing environment. This provides the opportunity to improve the performance of fuzzy systems deployed in scenarios where the computing environment is resource-constrained and volatile, such as multiple autonomous robots, smart environments and sensor networks.
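The following hypothetical sketch conveys the kind of runtime relocation the AMCF enables: a dataflow fragment whose execution node can change between invocations. The Fragment/node API is invented here; the AMCF's real interface is not described in the abstract.

```python
# A hypothetical sketch of runtime fragment relocation. The API is an
# assumption for illustration, not the AMCF's actual interface.
class Fragment:
    def __init__(self, name, func, node):
        self.name, self.func, self.node = name, func, node

    def move_to(self, node):
        self.node = node                  # relocate at runtime

    def run(self, *args):
        print(f"[{self.node}] running {self.name}")
        return self.func(*args)

# A tiny fuzzy-membership fragment, initially on one robot, moved to another.
triangular = lambda x: max(0.0, 1.0 - abs(x - 5.0) / 2.0)
frag = Fragment("membership", triangular, node="robot-1")
print(frag.run(4.5))                      # [robot-1] ... 0.75
frag.move_to("robot-2")                   # network conditions changed
print(frag.run(4.5))                      # [robot-2] ... same result, new node
```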
Abstract:
Supercapacitors are increasingly used as short-term energy storage elements in distributed generation systems. The traditional approach to integrating them into the main system is the use of interfacing dc-dc converters, which introduce additional costs and power losses. This paper, therefore, presents a novel direct integration scheme for supercapacitors and thereby eliminates the associated costs and power losses of interfacing converters. The idea is simply to replace the ordinary capacitors of three-level flying-capacitor rectifiers with supercapacitors and operate them under variable voltage conditions. An analysis of the reduction of power losses by the proposed system is presented. Furthermore, supercapacitor sizing and implementation issues, such as the effects of variable voltage operation and the resistive behavior of supercapacitors at high frequencies, are also discussed. Simulation results are presented to verify the efficacy of the proposed system in suppressing short-term power fluctuations in a wind generation system.
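On the sizing question, a back-of-the-envelope sketch using the standard usable-energy relation E = 0.5 * C * (Vmax^2 - Vmin^2) for a capacitor cycled over a voltage window, which is what variable-voltage operation implies, is given below. All numbers are illustrative, not taken from the paper.

```python
# A hedged sizing sketch from the standard usable-energy relation for a
# supercapacitor swung between two voltages. Figures are illustrative only.
def required_capacitance(p_fluct_w, duration_s, v_max, v_min):
    """Capacitance needed to buffer p_fluct_w watts for duration_s seconds."""
    energy_j = p_fluct_w * duration_s
    return 2.0 * energy_j / (v_max**2 - v_min**2)

# Buffer a 5 kW fluctuation for 10 s across a 400 V -> 200 V swing.
c = required_capacitance(5_000, 10, 400, 200)
print(f"{c:.2f} F")   # ~0.83 F
```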
Abstract:
With new national targets for patient flow in public hospitals designed to increase efficiencies in patient care and resource use, better knowledge of events affecting length of stay will support improved bed management and scheduling of procedures. This paper presents a case study involving the integration of material from each of three databases in operation at one tertiary hospital and demonstrates that it is possible to follow patient journeys from admission to discharge. What is known about this topic? At present, patient data at one Queensland tertiary hospital are assembled in three information systems: (1) the Hospital Based Corporate Information System (HBCIS), which tracks patients from in-patient admission to discharge; (2) the Emergency Department Information System (EDIS), containing patient data from presentation to departure from the emergency department; and (3) the Operating Room Management Information System (ORMIS), which records surgical operations. What does this paper add? This paper describes how a new enquiry tool may be used to link the three hospital information systems for studying the hospital journey through different wards and/or operating theatres for both individual patients and groups of patients. What are the implications for practitioners? An understanding of patients' journeys provides better insight into patient flow and provides a tool for research relating to access block, as well as for optimising the use of physical and human resources.
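A simplified sketch of the record-linkage idea follows: stitching one patient's journey together from extracts of the three systems named above. The field names and the shared patient identifier are assumptions for illustration; the paper's enquiry tool is not specified in the abstract.

```python
# A hedged sketch of linking HBCIS, EDIS and ORMIS extracts into one
# chronological patient journey. Record layouts are hypothetical.
edis  = [{"patient_id": 7, "ts": "2023-01-02 08:10", "event": "ED presentation"}]
ormis = [{"patient_id": 7, "ts": "2023-01-02 14:00", "event": "appendicectomy"}]
hbcis = [{"patient_id": 7, "ts": "2023-01-02 11:30", "event": "ward admission"},
         {"patient_id": 7, "ts": "2023-01-04 10:00", "event": "discharge"}]

def journey(patient_id):
    events = [r for src in (edis, hbcis, ormis) for r in src
              if r["patient_id"] == patient_id]
    return sorted(events, key=lambda r: r["ts"])   # chronological journey

for e in journey(7):
    print(e["ts"], e["event"])
```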
Abstract:
In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general purpose distributed computing systems. ELISA uses estimated state information, based on periodic exchange of exact state information between neighbouring nodes, to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting the load transfer and status exchange to within the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect-information algorithm and is superior to other load balancing schemes based on random sharing and the Ni-Hwang algorithm. A sensitivity analysis to study the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large-dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison to other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general purpose distributed computing systems.
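A schematic sketch of the central idea follows: between periodic exact-state exchanges, a node extrapolates each buddy's queue length from its last reported load and arrival rate, and transfers work only to the buddy whose estimate is lowest. The linear estimator and all the numbers are illustrative assumptions, not ELISA's exact formulation.

```python
# A hedged sketch of estimated-state load balancing within a buddy set.
# The extrapolation rule is an illustrative assumption.
def estimate(last_exact, arrival_rate, service_rate, elapsed_s):
    """Estimated queue length some time after the last exact report."""
    return max(0.0, last_exact + (arrival_rate - service_rate) * elapsed_s)

buddies = {  # buddy node -> (last exact load, arrival rate, service rate)
    "n1": (4.0, 0.8, 1.0),
    "n2": (2.0, 1.5, 1.0),
}
elapsed = 5.0
estimates = {b: estimate(*p, elapsed) for b, p in buddies.items()}
print(estimates)                      # {'n1': 3.0, 'n2': 4.5}
target = min(estimates, key=estimates.get)
print("transfer load to", target)     # n1, despite its higher last report
```

The point of the example: using only stale exact reports plus rates, the sender still picks the node that is lighter now, which is how ELISA trades status-exchange frequency for estimation.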