11 results for Centralised data warehouse Architecture

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architecture: pull architectures and push architectures. There is exiguous data on the performance of CDC architectures in a real-time environment. Performance data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution, which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesise DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS, which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition for capture latency and how to measure it does not exist in the field. We create this definition and extend the TPC-C benchmark to make the capture latency measurement. The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places a minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
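
Capture latency, as defined here, can be measured directly. A minimal sketch in Python, assuming hypothetical apply_change and wait_for_capture hooks rather than the thesis's actual TPC-C extension:

```python
import time

# Minimal sketch of measuring capture latency as defined above.
# apply_change and wait_for_capture are hypothetical hooks into the OLTP
# database and the CDC mechanism; the thesis's actual measurement
# extends the TPC-C benchmark rather than using these names.

def measure_capture_latency(apply_change, wait_for_capture, change_id):
    apply_change(change_id)             # change committed to the OLTP database
    committed_at = time.monotonic()
    wait_for_capture(change_id)         # block until the CDC mechanism sees it
    captured_at = time.monotonic()
    return captured_at - committed_at   # capture latency, in seconds
```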

Relevance:

100.00%

Publisher:

Abstract:

The performance of a supply chain depends critically on the coordinating actions and decisions undertaken by the trading partners. The sharing of product and process information plays a central role in this coordination and is a key driver for the success of the supply chain. In this paper we propose the concept of "Linked pedigrees": linked datasets that enable the sharing of traceability information about products as they move along the supply chain. We present a distributed, decentralised, linked-data-driven architecture that consumes real-time supply chain linked data to generate linked pedigrees. We then present a communication protocol to enable the exchange of linked pedigrees among trading partners. We exemplify the utility of linked pedigrees with examples from the perishable goods logistics supply chain.
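
As a rough illustration of the idea, the sketch below chains pedigrees as linked data using rdflib; the PED vocabulary is a hypothetical namespace, not the ontology used in the paper:

```python
from rdflib import Graph, Namespace, URIRef

# Illustrative sketch only: the PED vocabulary below is a hypothetical
# namespace, not the ontology used in the paper.
PED = Namespace("http://example.org/pedigree#")

def make_pedigree(g, batch, partner, prev_pedigree=None):
    """Issue a pedigree for a product batch and link it to the pedigree
    issued by the previous trading partner in the chain."""
    pedigree = URIRef(f"{batch}/pedigree/{partner.split('/')[-1]}")
    g.add((pedigree, PED.forBatch, URIRef(batch)))
    g.add((pedigree, PED.issuedBy, URIRef(partner)))
    if prev_pedigree is not None:
        # the "link" in linked pedigrees: each dataset points upstream
        g.add((pedigree, PED.previousPedigree, prev_pedigree))
    return pedigree

g = Graph()
g.bind("ped", PED)
p1 = make_pedigree(g, "http://example.org/batch/42",
                   "http://example.org/partner/producer")
p2 = make_pedigree(g, "http://example.org/batch/42",
                   "http://example.org/partner/logistics", prev_pedigree=p1)
print(g.serialize(format="turtle"))
```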

Relevance:

40.00%

Publisher:

Abstract:

Most of the existing work on information integration in the Semantic Web concentrates on resolving schema-level problems. Specific issues of data-level integration (instance coreferencing, conflict resolution, handling uncertainty) are usually tackled by applying the same techniques as for ontology schema matching or by reusing solutions produced in the database domain. However, data structured according to OWL ontologies has specific features: e.g., classes are organized into a hierarchy, properties are inherited, and data constraints differ from those defined by a database schema. This paper describes how these features are exploited in our architecture, KnoFuss, designed to support data-level integration of semantic annotations.
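
One such feature can be sketched concretely: the toy scores below let the class hierarchy influence an instance coreference decision. The weights and similarity measure are illustrative assumptions, not KnoFuss internals:

```python
# Hypothetical sketch of one idea above: letting the OWL class hierarchy
# influence instance coreferencing. The weights and the label_similarity
# measure are illustrative assumptions, not KnoFuss internals.

def class_compatibility(c1, c2, superclasses):
    """Weight for two class assignments, given a mapping
    class -> set of its (transitive) superclasses."""
    if c1 == c2:
        return 1.0
    if c1 in superclasses.get(c2, set()) or c2 in superclasses.get(c1, set()):
        return 0.8   # one class subsumes the other, e.g. Person and Researcher
    return 0.1       # unrelated classes rarely denote the same individual

def coreference_score(a, b, superclasses, label_similarity):
    """Combine label similarity with class compatibility for two instances,
    each given as a dict with 'class' and 'label' keys."""
    return (class_compatibility(a["class"], b["class"], superclasses)
            * label_similarity(a["label"], b["label"]))
```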

Relevance:

30.00%

Publisher:

Abstract:

A local area network that can support both voice and data packets offers economic advantages, since only a single network is needed for both types of traffic, together with greater flexibility in meeting changing user demands and more efficient use of the transmission capacity. The latter aspect is very important in local broadcast networks where capacity is a scarce resource, for example mobile radio. This research has examined two types of local broadcast network: the Ethernet-type bus local area network and a mobile radio network with a central base station. With such contention networks, medium access control (MAC) protocols are required to gain access to the channel. MAC protocols must provide efficient scheduling of the channel among the distributed population of stations that want to transmit. No access scheme can exceed the performance of a single server queue, due to the spatial distribution of the stations: stations cannot in general form a queue without using part of the channel capacity to exchange protocol information. In this research, several medium access protocols have been examined and developed in order to increase the channel throughput compared to existing protocols. However, the established performance measures of average packet time delay and throughput cannot adequately characterise protocol performance for packet voice. Rather, the percentage of bits delivered within a given time bound becomes the relevant performance measure. Performance of the protocols has been evaluated using discrete event simulation and in some cases also by mathematical modelling. All the protocols use either implicit or explicit reservation schemes, with their efficiency dependent on the fact that many voice packets are generated periodically within a talkspurt. Two of the protocols are based on the existing 'Reservation Virtual Time CSMA/CD' protocol, which forms a distributed queue through implicit reservations. This protocol has been improved firstly by utilising two channels, a packet transmission channel and a packet contention channel; packet contention is then performed in parallel with packet transmission to increase throughput. The second protocol uses variable-length packets to reduce the contention time between transmissions on a single channel. A third protocol, based on contention for explicit reservations, was also developed. Once a station has achieved a reservation, it maintains this effective queue position for the remainder of the talkspurt and transmits after it has sensed the transmission from the preceding station within the queue. In the mobile radio environment, adaptations to the protocols were necessary so that their operation was robust to signal fading. This was achieved through centralised control at a base station, unlike the local area network versions where control was distributed among the stations. The results show an improvement in throughput compared to some previous protocols. Further work includes subjective testing to validate the protocols' effectiveness.
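
The voice-oriented measure is straightforward to compute from simulated per-packet delays; a minimal sketch with placeholder data:

```python
# Sketch of the voice-oriented measure above: the fraction of packets
# delivered within a time bound, rather than the average delay. Real
# delays would come from the discrete event simulation of a MAC
# protocol; the values below are placeholders.

def fraction_within_bound(delays_ms, bound_ms):
    return sum(1 for d in delays_ms if d <= bound_ms) / len(delays_ms)

delays = [12.0, 35.5, 48.1, 120.7, 22.3]   # per-packet delays (placeholder)
print(f"{fraction_within_bound(delays, 50.0):.0%} of packets met the 50 ms bound")
```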

Relevance:

30.00%

Publisher:

Abstract:

A novel architecture for microwave/millimeter-wave signal generation and data modulation using a fiber-grating-based distributed feedback laser is proposed in this letter. For demonstration, a 155.52-Mb/s data stream on a 16.9-GHz subcarrier has been transmitted and recovered successfully. The results indicate that this technology would benefit future microwave data transmission systems.

Relevance:

30.00%

Publisher:

Abstract:

Almost a decade has passed since the objectives and benefits of autonomic computing were stated, yet even the latest system designs and deployments exhibit only limited and isolated elements of autonomic functionality. In previous work, we identified several of the key challenges behind this delay in the adoption of autonomic solutions, and proposed a generic framework for the development of autonomic computing systems that overcomes these challenges. In this article, we describe how existing technologies and standards can be used to realise our autonomic computing framework, and present its implementation as a service-oriented architecture. We show how this implementation employs a combination of automated code generation, model-based and object-oriented development techniques to ensure that the framework can be used to add autonomic capabilities to systems whose characteristics are unknown until runtime. We then use our framework to develop two autonomic solutions for the allocation of server capacity to services of different priorities and variable workloads, thus illustrating its application in the context of a typical data-centre resource management problem.
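
A minimal sketch of the kind of allocation policy such a solution automates, dividing a fixed capacity pool by priority and workload; the weighting rule is an assumption, not the article's policy:

```python
# Minimal sketch of the kind of policy such a solution automates:
# dividing a fixed pool of server capacity among services by priority
# and current workload. The weighting rule is an assumption, not the
# policy implemented in the article.

def allocate_capacity(total_capacity, services):
    """services: list of dicts with 'name', 'priority' (higher = more
    important) and 'demand' (current workload, in capacity units)."""
    weights = {s["name"]: s["priority"] * s["demand"] for s in services}
    total = sum(weights.values()) or 1.0
    return {name: total_capacity * w / total for name, w in weights.items()}

print(allocate_capacity(100, [
    {"name": "gold",   "priority": 3, "demand": 40},
    {"name": "silver", "priority": 1, "demand": 60},
]))   # gold gets twice silver's share under this rule
```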

Relevance:

30.00%

Publisher:

Abstract:

This research was conducted at the Space Research and Technology Centre of the European Space Agency at Noordwijk in the Netherlands. ESA is an international organisation that brings together a range of scientists, engineers and managers from 14 European member states. The motivation for the work was to enable decision-makers, in a culturally and technologically diverse organisation, to share information for the purpose of making decisions that are well informed about the risk-related aspects of the situations they seek to address. The research examined the use of decision support system (DSS) technology to facilitate decision-making of this type. This involved identifying the technology available and its application to risk management. Decision-making is a complex activity that does not lend itself to exact measurement or precise understanding at a detailed level. In view of this, a prototype DSS was developed through which to understand the practical issues to be accommodated and to evaluate alternative approaches to supporting decision-making of this type. The problem of measuring the effect upon the quality of decisions has been approached through expert evaluation of the software developed. The practical orientation of this work was informed by a review of the relevant literature in decision-making, risk management, decision support and information technology. Communication and information technology unite the major themes of this work. This allows correlation of the interests of the research with European public policy. The principles of communication were also considered in the topic of information visualisation: this emerging technology exploits flexible modes of human-computer interaction (HCI) to improve the cognition of complex data. Risk management is itself an area characterised by complexity, and risk visualisation is advocated for application in this field of endeavour. The thesis provides recommendations for future work in the fields of decision-making, DSS technology and risk management.

Relevance:

30.00%

Publisher:

Abstract:

The warehouse is an essential component of the supply chain, linking the chain partners and providing them with functions of product storage, inbound and outbound operations, along with value-added processes. Allocation of warehouse resources should be efficient and effective to achieve optimum productivity and reduce operational costs. Radio frequency identification (RFID) is a technology capable of providing real-time information about supply chain operations. It has been used by warehousing and logistics enterprises to achieve reduced shrinkage, improved material handling and tracking, and increased accuracy of data collection. However, both academics and practitioners express concerns about challenges to RFID adoption in the supply chain. This paper provides a comprehensive analysis of the problems encountered in RFID implementation at warehouses, discussing the theoretical and practical adoption barriers and the causes of not achieving the full potential of the technology. Lack of foreseeable return on investment (ROI) and high costs are the most commonly reported obstacles. The variety of standards and radio wave frequencies is identified as a source of concern for decision makers. The inaccurate performance of RFID within the warehouse environment is examined, and a description of the integration challenges between warehouse management systems and RFID technology is given. The paper discusses existing solutions to the technological, investment and performance barriers to RFID adoption. Factors to consider when implementing RFID technology are given to help alleviate implementation problems. By illustrating the challenges of RFID in the warehouse environment and discussing possible solutions, the paper aims to help both academics and practitioners focus on the key areas constituting an obstacle to the technology's growth. As more studies address these challenges, the realisation of RFID benefits for warehouses and the supply chain will become a reality.
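
The ROI barrier can be made concrete with a back-of-envelope payback calculation; the figures below are illustrative placeholders, not data from the paper:

```python
# Back-of-envelope sketch of the ROI concern above: a simple payback
# period for an RFID deployment. All figures are illustrative
# placeholders, not data from the paper.

def payback_years(upfront_cost, annual_saving, annual_running_cost):
    net_annual = annual_saving - annual_running_cost
    if net_annual <= 0:
        return float("inf")   # the investment never pays back
    return upfront_cost / net_annual

# e.g. tags, readers and integration, vs. savings from reduced shrinkage
# and faster, more accurate data collection
print(f"payback: {payback_years(500_000, 180_000, 40_000):.1f} years")
```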

Relevance:

30.00%

Publisher:

Abstract:

In recent years, increased interest in introducing radio frequency identification (RFID) technology in warehousing has been observed. Early adopters of RFID reported numerous benefits, including reduced shrinkage, real-time tracking and better accuracy of data collection. Alongside the academic and industrial discussion of the benefits achievable in RFID-enabled warehouses, there are reports of issues related to the adoption of RFID technology in warehousing. This paper reviews scientific reports of RFID implementations in warehouses and discusses the adoption barriers and the causes of not achieving the full potential of the technology. The following adoption barriers are identified and set in the warehousing context: lack of foreseeable return on investment (ROI), unreliable performance of RFID systems, standardisation, integration with legacy systems, and privacy/security concerns. As more studies address these challenges, the realisation of RFID benefits for warehouses will become a reality.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The human retinal vasculature has been demonstrated to exhibit fractal, or statistically self-similar, properties. Fractal analysis offers a simple quantitative method to characterise the complexity of the branching vessel network in the retina. Several methods have been proposed to quantify the fractal properties of the retina. Methods: Twenty-five healthy volunteers underwent retinal photography, retinal oximetry and ocular biometry. A robust method to evaluate the fractal properties of the retinal vessels is proposed; it consists of manual vessel segmentation and box counting of 50-degree retinal photographs centred on the fovea. Results: Data are presented on the associations between the fractal properties of the retinal vessels and various functional properties of the retina. Conclusion: Fractal properties of the retina could offer a promising tool to assess the risk and prognostic factors that define retinal disease. Outstanding efforts concern the need to adopt a standardised protocol for assessing the fractal properties of the retina, and to further demonstrate their association with disease processes.
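
The box-counting step lends itself to a short sketch on a binary vessel segmentation; this is generic box counting, not the study's exact protocol:

```python
import numpy as np

# Sketch of the box-counting step described above, applied to a binary
# vessel segmentation (True = vessel pixel). This is generic box
# counting, not the study's exact protocol, and it assumes the mask
# contains vessel pixels at every box size so all counts are positive.

def box_count_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in box_sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes holding vessel
    # fractal dimension = -slope of log(count) against log(box size)
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope
```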

Relevance:

30.00%

Publisher:

Abstract:

One of the current challenges in model-driven engineering is enabling effective collaborative modelling. Two common approaches are storing the models in a central repository, or keeping them under a traditional file-based version control system and building a centralized index for model-wide queries. Either way, special attention must be paid to the nature of these repositories and indexes as networked services: they should remain responsive even with an increasing number of concurrent clients. This paper presents an empirical study on the impact of certain key decisions on the scalability of concurrent model queries, using an Eclipse Connected Data Objects model repository and a Hawk model index. The study evaluates the impact of the network protocol, the API design and the internal caching mechanisms, and analyzes the reasons for their varying performance.
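
A generic sketch of the kind of measurement involved: issuing the same query from an increasing number of concurrent clients and recording throughput, with run_query standing in for a call to the repository or index API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Generic sketch of the measurement such a study involves: issuing the
# same model query from an increasing number of concurrent clients and
# recording throughput. run_query is a stand-in for a call to a model
# repository or index over its network API, not actual CDO/Hawk code.

def benchmark(run_query, clients, queries_per_client=100):
    def worker(_):
        for _ in range(queries_per_client):
            run_query()
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        list(pool.map(worker, range(clients)))
    elapsed = time.monotonic() - start
    return clients * queries_per_client / elapsed   # queries per second

for n in (1, 2, 4, 8, 16):
    print(n, "clients:", round(benchmark(lambda: time.sleep(0.001), n)), "q/s")
```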