990 results for transaction processing
Abstract:
One of the main challenges facing next generation Cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.
Abstract:
The proliferation of inexpensive workstations and networks has created a new era in distributed computing. At the same time, non-traditional applications such as computer-aided design (CAD), computer-aided software engineering (CASE), geographic-information systems (GIS), and office-information systems (OIS) have placed increased demands for high-performance transaction processing on database systems. The combination of these factors gives rise to significant challenges in the design of modern database systems. In this thesis, we propose novel techniques whose aim is to improve the performance and scalability of these new database systems. These techniques exploit client resources through client-based transaction management. Client-based transaction management is realized by providing logging facilities locally even when data is shared in a global environment. This thesis presents several recovery algorithms which utilize client disks for storing recovery related information (i.e., log records). Our algorithms work with both coarse and fine-granularity locking and they do not require the merging of client logs at any time. Moreover, our algorithms support fine-granularity locking with multiple clients permitted to concurrently update different portions of the same database page. The database state is recovered correctly when there is a complex crash as well as when the updates performed by different clients on a page are not present on the disk version of the page, even though some of the updating transactions have committed. This thesis also presents the implementation of the proposed algorithms in a memory-mapped storage manager as well as a detailed performance study of these algorithms using the OO1 database benchmark. The performance results show that client-based logging is superior to traditional server-based logging. This is because client-based logging is an effective way to reduce dependencies on server CPU and disk resources and, thus, prevents the server from becoming a performance bottleneck as quickly when the number of clients accessing the database increases.
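The client-based logging scheme summarized above can be pictured as a write-ahead log kept on the client's local disk: before a dirty page (or sub-page object, under fine-granularity locking) is shipped back to the server, and before a commit is acknowledged, the corresponding log records are forced locally. The sketch below is a minimal illustration of that discipline under those assumptions, not the thesis's actual recovery algorithms; all class and field names are hypothetical.

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

/** Minimal sketch of client-side write-ahead logging; all names are hypothetical. */
public class ClientLog {
    private final FileOutputStream fileOut;      // log file on the client's local disk
    private final DataOutputStream log;
    private long nextLsn = 1;
    private long lastForcedLsn = 0;

    public ClientLog(File logFile) throws IOException {
        this.fileOut = new FileOutputStream(logFile, true);
        this.log = new DataOutputStream(new BufferedOutputStream(fileOut));
    }

    /** Append a redo/undo record for an update this client made to a page (or sub-page object). */
    public synchronized long append(long txnId, long pageId, byte[] redo, byte[] undo) throws IOException {
        long lsn = nextLsn++;
        log.writeLong(lsn);
        log.writeLong(txnId);
        log.writeLong(pageId);
        log.writeInt(redo.length);  log.write(redo);
        log.writeInt(undo.length);  log.write(undo);
        return lsn;
    }

    /** WAL rule: force all records up to pageLsn to disk before shipping the dirty page to the server. */
    public synchronized void forceUpTo(long pageLsn) throws IOException {
        if (pageLsn > lastForcedLsn) {
            log.flush();
            fileOut.getFD().sync();              // push buffered records to the client disk
            lastForcedLsn = pageLsn;
        }
    }

    /** Commit rule: the transaction's commit record must be forced before commit is acknowledged. */
    public synchronized void commit(long txnId) throws IOException {
        long commitLsn = append(txnId, -1, new byte[0], new byte[0]);   // -1 marks a commit record
        forceUpTo(commitLsn);
    }
}
```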
Abstract:
The REpresentational State Transfer (REST) architectural style describes the design principles that made the World Wide Web scalable, and the same principles can be applied in an enterprise context to achieve loosely coupled and scalable application integration. In recent years, RESTful services have been gaining traction in industry and are commonly used as a simpler alternative to SOAP Web Services. However, one of the main drawbacks of RESTful services is the lack of standard mechanisms to support advanced quality-of-service requirements that are common to enterprises. Transaction processing is one of the essential features of enterprise information systems, and several transaction models have been proposed in recent years to fill the gap of transaction processing in RESTful services. The goal of this paper is to analyze the state-of-the-art RESTful transaction models and identify the current challenges.
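Several RESTful transaction models in this space expose the transaction itself as a resource: a client creates it with POST, tags its work with the transaction's URI, and then drives the outcome by updating the transaction's state. The sketch below shows that interaction pattern with Java's built-in HTTP client; the URIs, headers, and payloads are illustrative assumptions, not any specific model analyzed in the paper.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Sketch of a "transaction as a resource" interaction (URIs and payloads are hypothetical). */
public class RestTxClient {
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // 1. Create a transaction resource; the coordinator returns its URI in the Location header.
        HttpResponse<String> created = HTTP.send(
            HttpRequest.newBuilder(URI.create("https://coordinator.example/transactions"))
                .POST(HttpRequest.BodyPublishers.noBody()).build(),
            HttpResponse.BodyHandlers.ofString());
        String txUri = created.headers().firstValue("Location").orElseThrow();

        // 2. Perform work against participant services, tagging each request with the transaction URI.
        HTTP.send(HttpRequest.newBuilder(URI.create("https://orders.example/orders"))
                .header("Link", "<" + txUri + ">; rel=\"transaction\"")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\",\"qty\":1}"))
                .build(),
            HttpResponse.BodyHandlers.ofString());

        // 3. Commit (or abort) by updating the state of the transaction resource.
        HTTP.send(HttpRequest.newBuilder(URI.create(txUri))
                .PUT(HttpRequest.BodyPublishers.ofString("{\"status\":\"committed\"}"))
                .build(),
            HttpResponse.BodyHandlers.ofString());
    }
}
```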
Abstract:
This thesis is a study of performance management of Complex Event Processing (CEP) systems. CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, and these characteristics introduce new challenges and opportunities for the performance management of CEP systems. Methodologies used in benchmarking CEP systems in many performance studies focus on scaling the load injection but do not consider the impact of the functional capabilities of CEP systems. This thesis proposes an approach that evaluates the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems, CEPBen. The CEPBen benchmark platform is developed to explore the fundamental functional performance of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and for evaluating their performance. Studies of factors and new metrics are carried out with the CEPBen benchmark platform on Esper. Different measurement points for response time in the performance management of CEP systems are discussed, and the response time of a targeted event is proposed as a quality-of-service metric to be used in combination with the traditional response time of CEP systems. Maximum query load is proposed as a capacity indicator with respect to query complexity, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a performance factor that influences CEP system performance.
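The three functional behaviours that CEPBen exercises (filtering, transformation and event pattern detection) can be pictured in a few lines of plain Java. The sketch below is not Esper or CEPBen code; it only illustrates what each operation does to an event stream, plus a response-time measurement for a targeted event in the spirit of the metric proposed above. Event types, thresholds, and names are made up.

```java
import java.util.List;

/** Tiny illustration of CEP filtering, transformation and pattern detection (not Esper/CEPBen code). */
public class CepSketch {
    record Event(String type, double value, long tsNanos) {}

    public static void main(String[] args) {
        List<Event> stream = List.of(
            new Event("temp", 21.0, System.nanoTime()),
            new Event("temp", 95.5, System.nanoTime()),
            new Event("pressure", 3.2, System.nanoTime()),
            new Event("temp", 97.1, System.nanoTime()));

        // Filtering: keep only events that satisfy a predicate.
        List<Event> hot = stream.stream()
            .filter(e -> e.type().equals("temp") && e.value() > 90)
            .toList();

        // Transformation: derive new (alert) events from incoming ones.
        List<String> alerts = hot.stream().map(e -> "ALERT temp=" + e.value()).toList();

        // Pattern detection: one hot reading followed by another hot reading later in the stream.
        boolean pattern = false;
        boolean seenHot = false;
        for (Event e : stream) {
            boolean isHot = e.type().equals("temp") && e.value() > 90;
            if (isHot && seenHot) { pattern = true; break; }
            if (isHot) seenHot = true;
        }

        // Response time of a targeted event: detection time minus the event's arrival timestamp.
        long responseNanos = System.nanoTime() - hot.get(hot.size() - 1).tsNanos();
        System.out.printf("filtered=%d alerts=%s pattern=%b responseMicros=%d%n",
            hot.size(), alerts, pattern, responseNanos / 1_000);
    }
}
```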
Abstract:
It is well accepted that different types of distributed architectures require different degrees of coupling. For example, in client-server and three-tier architectures, application components are generally tightly coupled, both with one another and with the underlying middleware. Meanwhile, in off-line transaction processing, grid computing and mobile applications, the degree of coupling between application components and with the underlying middleware needs to be minimized. Terms such as ‘synchronous’, ‘asynchronous’, ‘blocking’, ‘non-blocking’, ‘directed’, and ‘non-directed’ are often used to refer to the degree of coupling required by an architecture or provided by a middleware. However, these terms are used with various connotations. Although various informal definitions have been provided, there is a lack of an overarching formal framework to unambiguously communicate architectural requirements with respect to (de-)coupling. This article addresses this gap by: (i) formally defining three dimensions of (de-)coupling; (ii) relating these dimensions to existing middleware; and (iii) proposing notational elements to represent various coupling integration patterns. This article also discusses a prototype that demonstrates the feasibility of its implementation.
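As one way to make the (de-)coupling vocabulary concrete, the sketch below contrasts a tightly coupled synchronous, blocking call with a decoupled asynchronous exchange through a queue, where the producer's non-blocking send returns immediately and the consumer only ever waits on its own queue. It uses only the JDK and illustrates the terminology, not the article's formal framework; the order-processing names are invented.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

/** Contrasts synchronous/blocking and asynchronous/non-blocking interaction (illustrative only). */
public class CouplingSketch {

    // Tightly coupled: the caller invokes the component directly and blocks for its reply.
    static String synchronousCall() {
        return processOrder("order-42");            // caller and callee are coupled in time
    }

    // Decoupled: the caller hands the message to a queue and continues; a consumer picks it up later.
    static final BlockingQueue<String> QUEUE = new ArrayBlockingQueue<>(100);

    static boolean asynchronousSend(String order) {
        return QUEUE.offer(order);                  // non-blocking: returns immediately, even if the queue is full
    }

    static void startConsumer() {
        CompletableFuture.runAsync(() -> {
            try {
                while (true) {
                    String order = QUEUE.take();    // the consumer blocks only on its own queue
                    System.out.println(processOrder(order));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    static String processOrder(String order) {
        return "processed " + order;
    }

    public static void main(String[] args) throws InterruptedException {
        startConsumer();
        System.out.println(synchronousCall());      // blocking, directed interaction
        asynchronousSend("order-43");               // non-blocking, non-directed via the queue
        Thread.sleep(200);                          // give the background consumer a moment before exiting
    }
}
```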
Abstract:
Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions. In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.
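A heavily simplified sketch of the idea: if each statement of a batch program is annotated with the condition under which it executes, grouping statements by guard predicate yields candidate conditional code regions, which the paper's characterization further refines using slicing and data-flow properties. The toy representation below is a hypothetical stand-in for real static analysis; the statements and guards are invented.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Toy stand-in for identifying candidate "services" as conditionally guarded code regions. */
public class ServiceRegionSketch {
    record Stmt(int line, String guard, String text) {}   // guard = condition under which the stmt runs

    public static void main(String[] args) {
        List<Stmt> program = List.of(
            new Stmt(10, "recordType == 'P'", "applyPayment(acct, amount)"),
            new Stmt(11, "recordType == 'P'", "writePaymentAudit(acct)"),
            new Stmt(20, "recordType == 'R'", "processRefund(acct, amount)"),
            new Stmt(30, "true",               "updateRunTotals(amount)"));

        // Group statements by their guard predicate: each group is a candidate service region.
        Map<String, List<Stmt>> regions = new LinkedHashMap<>();
        for (Stmt s : program) {
            regions.computeIfAbsent(s.guard(), g -> new ArrayList<>()).add(s);
        }

        regions.forEach((guard, stmts) ->
            System.out.println("candidate service under guard [" + guard + "]: " +
                stmts.stream().map(Stmt::text).toList()));
    }
}
```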
Abstract:
Many business processes in enterprise applications are both long-running and transactional in nature. However, no current transaction model can provide full transaction support for such long-running business processes. This paper proposes a new transaction model, the pessimistic predicate/transform (PP/T) model, which can provide full transaction support for long-running business processes. A framework was proposed on the Enterprise JavaBeans platform to implement the PP/T model. The framework enables application developers to focus on the business logic, with the underlying platform providing the required transactional semantics. The development and maintenance effort is therefore greatly reduced. Simulations show that the model provides sound concurrency management for long-running business processes.
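The abstract does not detail the PP/T model's mechanics, but its "pessimistic predicate" side can be pictured as predicate locks held for the duration of a long-running process: a new lock is granted only if its predicate cannot overlap an existing one. The sketch below is a generic predicate-lock manager over simple integer ranges, offered only as an assumption-laden illustration, not the PP/T algorithm; all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

/** Generic predicate-lock sketch over integer ranges (not the actual PP/T algorithm). */
public class PredicateLockSketch {
    /** A predicate describing the rows a long-running process intends to update, e.g. 1000 <= accountId <= 1999. */
    record RangePredicate(String owner, int low, int high) {
        boolean overlaps(RangePredicate other) {
            return low <= other.high && other.low <= high;
        }
    }

    private final List<RangePredicate> granted = new ArrayList<>();

    /** Pessimistic: grant the lock only if no existing predicate from another owner can conflict. */
    public synchronized boolean acquire(RangePredicate p) {
        for (RangePredicate g : granted) {
            if (!g.owner().equals(p.owner()) && g.overlaps(p)) return false;  // possible conflict: wait or reject
        }
        granted.add(p);
        return true;
    }

    public synchronized void release(String owner) {
        granted.removeIf(p -> p.owner().equals(owner));
    }

    public static void main(String[] args) {
        PredicateLockSketch locks = new PredicateLockSketch();
        System.out.println(locks.acquire(new RangePredicate("bizProcessA", 1000, 1999)));  // true
        System.out.println(locks.acquire(new RangePredicate("bizProcessB", 1500, 2500)));  // false: ranges overlap
    }
}
```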
Abstract:
Transaction processing technology is a key technology for ensuring the reliability and consistency of information. A transaction is an atomic sequence of operations with the ACID properties (atomicity, consistency, isolation and durability). The concept originated in database management systems, where it is used to guarantee the consistency and reliability of application access to the database. In early applications, the transaction manager integrated inside a commercial DBMS provided the transaction processing functionality that applications required. With the development of network technology and changing application requirements, formerly centralized applications have evolved into distributed network applications, with data and processing spread across different computers. In this setting, transaction management is provided by dedicated middleware (for example, transaction monitors), and transaction processing technology has evolved into distributed transaction processing.
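Distributed transaction processing of the kind described above typically relies on an atomic commitment protocol such as two-phase commit, coordinated by the transaction middleware. The sketch below is a bare-bones, in-memory two-phase commit coordinator for illustration only; the participant interface is hypothetical, and real transaction monitors add logging, timeouts, and recovery.

```java
import java.util.List;

/** Bare-bones two-phase commit coordinator (in-memory illustration; no logging or timeouts). */
public class TwoPhaseCommitSketch {
    interface Participant {
        boolean prepare(String txnId);   // phase 1: vote yes/no after making the work durable
        void commit(String txnId);       // phase 2a: apply the commit decision
        void rollback(String txnId);     // phase 2b: apply the abort decision
    }

    static boolean runTransaction(String txnId, List<Participant> participants) {
        // Phase 1: collect votes from every participant (e.g., databases behind a TP monitor).
        for (Participant p : participants) {
            if (!p.prepare(txnId)) {
                participants.forEach(q -> q.rollback(txnId));   // any "no" vote aborts the transaction
                return false;
            }
        }
        // Phase 2: all voted yes, so the decision is commit and every participant must apply it.
        participants.forEach(p -> p.commit(txnId));
        return true;
    }

    public static void main(String[] args) {
        Participant db = new Participant() {
            public boolean prepare(String txnId) { return true; }
            public void commit(String txnId) { System.out.println("db commit " + txnId); }
            public void rollback(String txnId) { System.out.println("db rollback " + txnId); }
        };
        System.out.println("committed = " + runTransaction("tx-1", List.of(db, db)));
    }
}
```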
Abstract:
In mobile database systems, the mobility of the computing platform, frequent disconnections, and long-lived transactions make traditional transaction processing models unsuitable. To address transaction processing in mobile databases, this paper proposes a new mobile transaction processing model, the optimistic two-phase commit mobile transaction model (O2PC-MT). The model combines optimistic concurrency control with the two-phase commit protocol, providing flexible and effective support for the long-lived nature of mobile transactions. In addition, the model allows a mobile computer to submit the operations of a transaction in several batches and to move arbitrarily during transaction execution, thereby supporting interactive transactions and unrestricted mobility. Experimental results show that, compared with other mobile transaction processing models based on two-phase locking and its variants, O2PC-MT improves system transaction throughput and overall system performance.
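The central idea summarized above is to let a mobile transaction execute optimistically against cached data and defer conflict detection to a validation step folded into two-phase commit. The sketch below illustrates only the optimistic-validation half: each read records the version it observed, and commit succeeds only if those versions are unchanged at the server. All names are hypothetical, and the real O2PC-MT protocol couples this validation with the 2PC exchange.

```java
import java.util.HashMap;
import java.util.Map;

/** Optimistic validation at commit time (illustrative; the real O2PC-MT couples this with 2PC). */
public class OptimisticValidationSketch {
    // Server-side current versions of data items.
    static final Map<String, Long> SERVER_VERSIONS = new HashMap<>(Map.of("acct:1", 7L, "acct:2", 3L));

    // Versions observed by the mobile transaction while working (possibly disconnected).
    static final Map<String, Long> READ_SET = new HashMap<>();

    static long read(String key) {
        long version = SERVER_VERSIONS.get(key);   // in practice this may come from a local cache
        READ_SET.put(key, version);
        return version;
    }

    /** Validation phase: the transaction may commit only if nothing it read has changed since. */
    static boolean validateAndCommit() {
        for (Map.Entry<String, Long> e : READ_SET.entrySet()) {
            if (!SERVER_VERSIONS.get(e.getKey()).equals(e.getValue())) return false;  // conflict: abort
        }
        READ_SET.keySet().forEach(k -> SERVER_VERSIONS.merge(k, 1L, Long::sum));      // install new versions
        return true;
    }

    public static void main(String[] args) {
        read("acct:1");
        SERVER_VERSIONS.put("acct:1", 8L);          // another client updates the item meanwhile
        System.out.println("commit ok = " + validateAndCommit());   // false: validation detects the conflict
    }
}
```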
Abstract:
This report addresses the problem of fault tolerance to system failures for database systems that are to run on highly concurrent computers. It assumes that, in general, an application may have a wide distribution in the lifetimes of its transactions. Logging remains the method of choice for ensuring fault tolerance. A generational garbage collection technique manages the limited disk space reserved for log information; it does not require periodic checkpoints and is well suited to applications with a broad range of transaction lifetimes. An arbitrarily large collection of parallel log streams provides the necessary disk bandwidth.
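The generational approach to log space can be sketched as follows: records for short-lived transactions die quickly in a young log segment, and when that segment fills, only the records of still-active transactions are copied forward into an older segment before the young space is reclaimed, avoiding periodic checkpoints. The sketch below is a minimal, assumption-laden illustration using in-memory segments rather than the report's on-disk design.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Generational cleaning of an in-memory log segment (illustrative only). */
public class GenerationalLogSketch {
    record LogRecord(long txnId, String payload) {}

    private final List<LogRecord> youngSegment = new ArrayList<>();
    private final List<LogRecord> oldSegment = new ArrayList<>();

    void append(long txnId, String payload) {
        youngSegment.add(new LogRecord(txnId, payload));
    }

    /** When the young segment fills, keep only records whose transactions are still active. */
    void collectYoung(Set<Long> activeTxns) {
        for (LogRecord r : youngSegment) {
            if (activeTxns.contains(r.txnId())) oldSegment.add(r);  // survivor: copied to the older generation
        }
        youngSegment.clear();                                       // space held by finished txns is reclaimed
    }

    public static void main(String[] args) {
        GenerationalLogSketch log = new GenerationalLogSketch();
        log.append(1, "update A");   // short transaction, already finished before collection
        log.append(2, "update B");   // long-running transaction, still active
        log.collectYoung(Set.of(2L));
        System.out.println("survivors in old generation: " + log.oldSegment.size());   // 1
    }
}
```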
Abstract:
With the advent of cloud computing, many applications have embraced the ensuing paradigm shift towards modern distributed key-value data stores, like HBase, in order to benefit from the elastic scalability on offer. However, many applications still hesitate to make the leap from the traditional relational database model simply because they cannot compromise on the standard transactional guarantees of atomicity, isolation, and durability. To get the best of both worlds, one option is to integrate an independent transaction management component with a distributed key-value store. In this paper, we discuss the implications of this approach for durability. In particular, if the transaction manager provides durability (e.g., through logging), then we can relax durability constraints in the key-value store. However, if a component fails (e.g., a client or a key-value server), then we need a coordinated recovery procedure to ensure that commits are persisted correctly. In our research, we integrate an independent transaction manager with HBase. Our main contribution is a failure recovery middleware for the integrated system, which tracks the progress of each commit as it is flushed down by the client and persisted within HBase, so that we can recover reliably from failures. During recovery, commits that were interrupted by the failure are replayed from the transaction management log. Importantly, the recovery process does not interrupt transaction processing on the available servers. Using a benchmark, we evaluate the impact of component failure, and subsequent recovery, on application performance.
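The recovery middleware described above can be pictured as a commit-progress table kept alongside the transaction manager's log: each commit moves through states such as LOGGED, FLUSHED and PERSISTED, and on recovery any commit that never reached PERSISTED is replayed from the log. The sketch below uses plain in-memory structures as stand-ins for HBase and the transaction manager; the state names and classes are hypothetical, not the paper's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of commit-progress tracking and replay-based recovery (in-memory stand-in for HBase). */
public class RecoveryMiddlewareSketch {
    enum CommitState { LOGGED, FLUSHED, PERSISTED }
    record CommitRecord(long txnId, Map<String, String> writes) {}

    private final Map<Long, CommitState> progress = new LinkedHashMap<>();   // commit-progress table
    private final Map<Long, CommitRecord> txLog = new LinkedHashMap<>();     // transaction manager's log
    private final Map<String, String> store = new LinkedHashMap<>();         // stand-in for the key-value store

    void commit(CommitRecord rec, boolean crashBeforePersist) {
        txLog.put(rec.txnId(), rec);                    // durability comes from the transaction manager's log
        progress.put(rec.txnId(), CommitState.LOGGED);
        progress.put(rec.txnId(), CommitState.FLUSHED); // client flushes the writes toward the store
        if (crashBeforePersist) return;                 // simulate a client/server failure at this point
        store.putAll(rec.writes());
        progress.put(rec.txnId(), CommitState.PERSISTED);
    }

    /** Recovery: replay every logged commit that never reached PERSISTED. */
    void recover() {
        for (Map.Entry<Long, CommitState> e : progress.entrySet()) {
            if (e.getValue() != CommitState.PERSISTED) {
                store.putAll(txLog.get(e.getKey()).writes());
                e.setValue(CommitState.PERSISTED);
            }
        }
    }

    public static void main(String[] args) {
        RecoveryMiddlewareSketch mw = new RecoveryMiddlewareSketch();
        mw.commit(new CommitRecord(1, Map.of("row1", "v1")), true);   // commit interrupted by a failure
        mw.recover();
        System.out.println("row1 after recovery = " + mw.store.get("row1"));   // v1
    }
}
```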
Abstract:
Firms began outsourcing information system functions soon after the inception of electronic computing. Extant research has concentrated on large organizations and large-valued outsourcing contracts from a variety of different industries. Smaller-sized firms are inherently different from their large counterparts. These differences between small and large firms could lead to different information technology/information system (IT/IS) items being outsourced and different outsourcing agreements governing these arrangements. This research explores and examines the outsourcing practices of very small through to medium-sized manufacturing organizations. The in-depth case studies not only explored the extent to which different firms engaged in outsourcing but also the nuances of their outsourcing arrangements. The results reveal that all six firms tended to outsource the same sorts of functions. Some definite differences existed, however, in the strategies adopted in relation to the functions they outsourced. These differences arose for a variety of reasons, including size, locality, and holding company influences. The very small and small manufacturing firms tended to make outsourcing purchases on an ad hoc basis with little reliance on legal advice. In contrast, the medium-sized firms often used a more planned initiative and sought legal advice more often. Interestingly, not one of the six firms outsourced any of their transaction processing. These findings now give very small, small-, and medium-sized manufacturing firms the opportunity to compare their practices against other firms of similar size.
Abstract:
A major inhibitor to e-commerce stems from the reluctance consumers have to complete transactions because of concern over the use of private information divulged in online transaction processing. Because e-commerce occurs in a global environment, cultural factors are likely to have a significant impact on this concern. Building on work done in the areas of culture and privacy, and of trust and privacy, we explore the three-way relationship between culture, privacy and trust. Better, more appropriate, and contemporary measures of culture have recently been espoused, and a better understanding and articulation of internet users' information privacy concerns has been developed. We present the results of an exploratory study that builds on the work of Milberg, Gefen, and Bellman to better understand and test the effect that national culture has on trust and internet privacy.
Abstract:
Information systems have developed to the stage that there is plenty of data available in most organisations but there are still major problems in turning that data into information for management decision making. This thesis argues that the link between decision support information and transaction processing data should be through a common object model which reflects the real world of the organisation and encompasses the artefacts of the information system. The CORD (Collections, Objects, Roles and Domains) model is developed which is richer in appropriate modelling abstractions than current Object Models. A flexible Object Prototyping tool based on a Semantic Data Storage Manager has been developed which enables a variety of models to be stored and experimented with. A statistical summary table model COST (Collections of Objects Statistical Table) has been developed within CORD and is shown to be adequate to meet the modelling needs of Decision Support and Executive Information Systems. The COST model is supported by a statistical table creator and editor COSTed which is also built on top of the Object Prototyper and uses the CORD model to manage its metadata.
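To give a rough feel for the four abstractions named above, the sketch below shows one hypothetical reading of Collections, Objects, Roles and Domains as plain Java types, together with a COST-style summary cell aggregated over a collection. This is an illustration of the vocabulary only, not the thesis's actual CORD or COST definitions; every name and attribute is invented.

```java
import java.util.List;
import java.util.Map;

/** One hypothetical reading of Collections, Objects, Roles and Domains (not the thesis's definitions). */
public class CordSketch {
    record Domain(String name, List<String> allowedValues) {}            // a constrained set of values
    record CordObject(String id, Map<String, String> attributes) {}      // a real-world entity
    record Role(String name, CordObject player) {}                       // an object playing a part in a context
    record Collection(String name, List<Role> members) {}                // a grouping used for summarisation

    public static void main(String[] args) {
        Domain region = new Domain("Region", List.of("North", "South"));
        CordObject cust = new CordObject("C1", Map.of("region", "North", "revenue", "1200"));
        Collection northCustomers = new Collection("NorthCustomers",
            List.of(new Role("customer", cust)));

        // A COST-style summary cell: aggregate an attribute over the members of a collection.
        double total = northCustomers.members().stream()
            .mapToDouble(r -> Double.parseDouble(r.player().attributes().get("revenue")))
            .sum();
        System.out.println(region.name() + " / " + northCustomers.name() + " total revenue = " + total);
    }
}
```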