868 results for Transaction Processing


Relevance:

100.00%

Publisher:

Abstract:

One of the main challenges facing next generation Cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.

Relevance:

70.00%

Publisher:

Abstract:

The REpresentational State Transfer (REST) architectural style describes the design principles that made the World Wide Web scalable, and the same principles can be applied in an enterprise context to achieve loosely coupled and scalable application integration. In recent years, RESTful services have been gaining traction in industry and are commonly used as a simpler alternative to SOAP Web Services. However, one of the main drawbacks of RESTful services is the lack of standard mechanisms to support the advanced quality-of-service requirements that are common in enterprises. Transaction processing is one of the essential features of enterprise information systems, and several transaction models have been proposed in recent years to fill this gap for RESTful services. The goal of this paper is to analyze the state-of-the-art RESTful transaction models and identify the current challenges.
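As background for the kind of models surveyed here, several RESTful transaction proposals follow a try-confirm/cancel style in which tentative reservations are created as ordinary resources and later confirmed or cancelled. The sketch below is only an illustration of that general shape, not a model from the paper; the `Participant` class, the resource semantics, and the coordinator logic are assumptions made for the example.

```python
import uuid

class Participant:
    """A simulated RESTful service that exposes tentative reservations as resources."""

    def __init__(self, name):
        self.name = name
        self.reservations = {}  # reservation id -> {"state": ..., "payload": ...}

    def try_reserve(self, payload):
        # POST /reservations -> creates a tentative reservation resource
        rid = str(uuid.uuid4())
        self.reservations[rid] = {"state": "tentative", "payload": payload}
        return rid

    def confirm(self, rid):
        # PUT /reservations/{rid} with state=confirmed
        self.reservations[rid]["state"] = "confirmed"

    def cancel(self, rid):
        # DELETE /reservations/{rid}
        self.reservations.pop(rid, None)

def book_trip(flight, hotel):
    """Coordinator: reserve on all participants, then confirm all or cancel all."""
    pending = []
    try:
        pending.append((flight, flight.try_reserve({"seat": "12A"})))
        pending.append((hotel, hotel.try_reserve({"room": "double"})))
    except Exception:
        for service, rid in pending:
            service.cancel(rid)
        return False
    for service, rid in pending:
        service.confirm(rid)
    return True

if __name__ == "__main__":
    print(book_trip(Participant("flights"), Participant("hotels")))  # True
```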

Relevance:

70.00%

Publisher:

Abstract:

This thesis is a study of performance management of Complex Event Processing (CEP) systems. Since CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, these characteristics introduce new challenges and opportunities for the performance management of CEP systems. The methodologies used to benchmark CEP systems in many performance studies focus on scaling the load injection, but do not consider the impact of the functional capabilities of CEP systems. This thesis proposes an approach for evaluating the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems: CEPBen. The CEPBen benchmark platform is developed to explore the fundamental functional performance of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and for evaluating their performance. Studies on factors and new metrics are carried out with the CEPBen benchmark platform on Esper. Different measurement points for response time in the performance management of CEP systems are discussed, and the response time of a targeted event is proposed as a quality-of-service metric to be used in combination with the traditional response time in CEP systems. Maximum query load is proposed as a capacity indicator with respect to query complexity, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a factor that influences CEP system performance.
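CEPBen itself targets Esper and is not reproduced here; the self-contained sketch below merely illustrates the three functional behaviours the benchmark exercises (filtering, transformation and event pattern detection) and one way to measure the response time of a targeted event. The event fields, the threshold, and the pattern are invented for the example.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str
    value: float
    created: float = field(default_factory=time.perf_counter)

def filtering(events, threshold):
    """Keep only events whose value exceeds a threshold."""
    return [e for e in events if e.value > threshold]

def transformation(events):
    """Derive a new event attribute (here: a scaled value)."""
    return [Event(e.kind, e.value * 1.1, e.created) for e in events]

def detect_pattern(events):
    """Detect the pattern 'A followed by B' and report response times."""
    pending_a = None
    for e in events:
        if e.kind == "A":
            pending_a = e
        elif e.kind == "B" and pending_a is not None:
            # Response time of the targeted (pattern-completing) event:
            # time from its creation until the detection fires.
            latency = time.perf_counter() - e.created
            yield pending_a, e, latency
            pending_a = None

if __name__ == "__main__":
    stream = [Event("A", 5.0), Event("C", 1.0), Event("B", 9.0)]
    matches = detect_pattern(transformation(filtering(stream, 2.0)))
    for a, b, latency in matches:
        print(f"pattern A->B detected, response time {latency * 1e6:.1f} microseconds")
```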

Relevance:

60.00%

Publisher:

Abstract:

Dissertation presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management.

Relevance:

60.00%

Publisher:

Abstract:

Transaction processing is generally regarded as one of the basic requirements of reliable data processing. A transaction is a sequence of operations whose execution can be treated as a single logical action. Four basic rules, known by the acronym ACID, have been defined for the execution of transactions. In distributed systems the need for transaction processing grows further, because the success of operations cannot be ensured with local mechanisms alone. Several attempts have been made to standardise distributed transaction processing, but despite these efforts no generally accepted and open standards exist. The closest to such a standard is probably the X/Open DTP family of standards, and in particular the XA standard that belongs to it. This thesis also examines how the vendor-independent database architecture of the Intellitel ONE system should be developed in order to make it possible to run applications that require transaction processing on top of it.
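At its core, the XA standard mentioned above defines a prepare/commit interface between a transaction manager and its resource managers, i.e. two-phase commit. The sketch below is a generic illustration of that idea, not Intellitel ONE code; the class and method names are assumptions for the example.

```python
class ResourceManager:
    """A participant in a distributed transaction (XA-style prepare/commit/rollback)."""

    def __init__(self, name, fail_on_prepare=False):
        self.name = name
        self.fail_on_prepare = fail_on_prepare
        self.state = "idle"

    def prepare(self):
        # Phase 1: vote on whether the local work can be made durable.
        if self.fail_on_prepare:
            self.state = "aborted"
            return False
        self.state = "prepared"
        return True

    def commit(self):
        # Phase 2a: make the prepared work permanent.
        self.state = "committed"

    def rollback(self):
        # Phase 2b: undo the work.
        self.state = "rolled_back"

def two_phase_commit(participants):
    """Transaction manager: commit only if every participant votes yes."""
    if all(rm.prepare() for rm in participants):
        for rm in participants:
            rm.commit()
        return "committed"
    for rm in participants:
        rm.rollback()
    return "rolled_back"

if __name__ == "__main__":
    dbs = [ResourceManager("db1"), ResourceManager("db2", fail_on_prepare=True)]
    print(two_phase_commit(dbs))  # rolled_back, because db2 voted no
```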

Relevance:

60.00%

Publisher:

Abstract:

The objective of this thesis is to find out what kinds of product and order data management models are in use in the wholesale supply chain. The order-delivery chain in the trade sector is a highly information-intensive process, and it is therefore interesting from the perspective of information management. The thesis examines the practices of three case companies operating in Finland, from the grocery trade, pharmaceutical wholesale and electrical supplies wholesale sectors. In this thesis the wholesale organisation is the focal point of the supply chain. The exchange of information is examined between the wholesale organisation and its customers (B2B), and between the wholesaler and its first-tier principals and suppliers. The core material of the study consists of personal interviews; in addition, answers to the research questions were sought in the literature of the field. The thesis gives an overview of the current state and development targets of product and order data management in wholesale organisations, and it also offers a point of comparison for other organisations planning to develop their product and order data management. In all the industries examined, product and order data management is driven by the pursuit of cost-efficient operations. In product data management, the fast availability, reliability and accuracy of product data are of primary importance regardless of the industry. The centralised product data management of the grocery trade can justifiably be put forward as a model for other trade sectors as well. The large number of business partners and the growing information flows emphasise the importance of transaction processing in the supply chain.

Relevance:

60.00%

Publisher:

Abstract:

Transactions form the basis of many modern information technology services. A single transaction can be seen as a unit of work that the processing system has to carry out. Transaction processing aims to keep the system in a known and consistent state; this is achieved by ensuring that every transaction either succeeds or fails in its entirety. Transaction processing systems have grown, and the number of concurrently processed transactions has increased, as services have moved increasingly into networks. At the same time, developing and maintaining such systems becomes more difficult, so developers need better tools for monitoring the system. Transaction monitoring aims to follow the internal operation of the system at the granularity of an individual transaction or sub-transaction. Depending on the implementation, the developer can observe the system either in real time or afterwards, using monitoring data recorded during execution. This thesis presents the design process and solutions of a transaction monitoring component that makes it possible to observe the execution of transactions, store the monitoring data, and examine the results afterwards. The component was implemented as part of the Syncware software platform of Syncron Tech Oy.
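The Syncware component itself is not detailed in the abstract; the sketch below only illustrates the general idea it describes: recording per-transaction trace data (sub-steps, outcome, duration) so that execution can be examined afterwards. All names, the in-memory trace store, and the example workload are hypothetical.

```python
import time
from contextlib import contextmanager

TRACE_LOG = []  # in-memory store of monitoring records; a real system would persist these

@contextmanager
def monitored_transaction(tx_id):
    """Record the sub-steps, outcome and duration of a transaction for later inspection."""
    record = {"tx": tx_id, "steps": [], "status": "running", "start": time.time()}
    TRACE_LOG.append(record)
    try:
        yield record
        record["status"] = "committed"    # the whole transaction succeeded
    except Exception as exc:
        record["status"] = "rolled_back"  # all-or-nothing: any failure aborts it
        record["error"] = repr(exc)
    finally:
        record["duration"] = time.time() - record["start"]

def step(record, name):
    """Record one sub-operation of the transaction."""
    record["steps"].append({"name": name, "at": time.time()})

if __name__ == "__main__":
    with monitored_transaction("tx-1") as rec:
        step(rec, "reserve stock")
        step(rec, "charge customer")
    with monitored_transaction("tx-2") as rec:
        step(rec, "reserve stock")
        raise RuntimeError("payment declined")  # simulated failure; recorded, not propagated
    for r in TRACE_LOG:  # examine the results afterwards
        print(r["tx"], r["status"], [s["name"] for s in r["steps"]])
```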

Relevance:

60.00%

Publisher:

Abstract:

The objective of this study is to find out how sales management can be optimally supported with business information and knowledge. The first chapters of the study focus on the theoretical literature on sales planning, sales steering, business intelligence, and knowledge management. The empirical part of the study is a case study for which the material was collected through interviews with selected people in the company. The findings from the interviews were analyzed, and possible suggestions for solving the problems were made. The case study revealed that sales management requires a multitude of metrics and reports to steer sales in the desired direction. The information sources can be internal and external, and the optimal solution for satisfying the information needs is a combination of both. This information should be turned into knowledge by merging the firm's intellectual assets with the information from its transaction processing systems, in order to promote organizational learning and effective decision-making.

Relevance:

60.00%

Publisher:

Abstract:

With the advent of cloud computing, many applications have embraced the ensuing paradigm shift towards modern distributed key-value data stores, like HBase, in order to benefit from the elastic scalability on offer. However, many applications still hesitate to make the leap from the traditional relational database model simply because they cannot compromise on the standard transactional guarantees of atomicity, isolation, and durability. To get the best of both worlds, one option is to integrate an independent transaction management component with a distributed key-value store. In this paper, we discuss the implications of this approach for durability. In particular, if the transaction manager provides durability (e.g., through logging), then we can relax durability constraints in the key-value store. However, if a component fails (e.g., a client or a key-value server), then we need a coordinated recovery procedure to ensure that commits are persisted correctly. In our research, we integrate an independent transaction manager with HBase. Our main contribution is a failure recovery middleware for the integrated system, which tracks the progress of each commit as it is flushed down by the client and persisted within HBase, so that we can recover reliably from failures. During recovery, commits that were interrupted by the failure are replayed from the transaction management log. Importantly, the recovery process does not interrupt transaction processing on the available servers. Using a benchmark, we evaluate the impact of component failure, and subsequent recovery, on application performance.
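The paper's actual middleware works against HBase; the stand-alone sketch below only illustrates the recovery idea described in the abstract: commits are tracked in a transaction-management log, and after a failure any commit that was logged but never fully persisted in the key-value store is replayed. The log and store interfaces are invented for the example.

```python
class TransactionLog:
    """Durable log kept by the transaction manager (simulated in memory)."""

    def __init__(self):
        self.entries = {}  # commit id -> {"writes": ..., "persisted": bool}

    def append(self, commit_id, writes):
        self.entries[commit_id] = {"writes": writes, "persisted": False}

    def mark_persisted(self, commit_id):
        self.entries[commit_id]["persisted"] = True

class KeyValueStore:
    """Stand-in for a distributed key-value store such as HBase."""

    def __init__(self):
        self.data = {}

    def put_all(self, writes):
        self.data.update(writes)

def commit(log, store, commit_id, writes, crash_before_persist=False):
    """Track the commit in the log, then flush it to the store."""
    log.append(commit_id, writes)
    if crash_before_persist:
        return  # simulated client or server failure between logging and persisting
    store.put_all(writes)
    log.mark_persisted(commit_id)

def recover(log, store):
    """Replay commits that were logged but interrupted before being persisted."""
    for commit_id, entry in log.entries.items():
        if not entry["persisted"]:
            store.put_all(entry["writes"])
            log.mark_persisted(commit_id)

if __name__ == "__main__":
    log, store = TransactionLog(), KeyValueStore()
    commit(log, store, "c1", {"row1": "a"})
    commit(log, store, "c2", {"row2": "b"}, crash_before_persist=True)
    recover(log, store)
    print(store.data)  # both commits are present after recovery
```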

Relevance:

60.00%

Publisher:

Abstract:

Firms began outsourcing information system functions soon after the inception of electronic computing. Extant research has concentrated on large organizations and large-valued outsourcing contracts across a variety of industries. Smaller firms are inherently different from their large counterparts, and these differences could lead to different information technology/information system (IT/IS) items being outsourced and different outsourcing agreements governing these arrangements. This research explores and examines the outsourcing practices of very small through to medium-sized manufacturing organizations. The in-depth case studies explored not only the extent to which different firms engaged in outsourcing but also the nuances of their outsourcing arrangements. The results reveal that all six firms tended to outsource the same sorts of functions. Some definite differences existed, however, in the strategies adopted for the functions they outsourced. These differences arose for a variety of reasons, including size, locality, and holding company influences. The very small and small manufacturing firms tended to make outsourcing purchases on an ad hoc basis with little reliance on legal advice. In contrast, the medium-sized firms more often used a planned initiative and sought legal advice. Interestingly, not one of the six firms outsourced any of its transaction processing. These findings give very small, small, and medium-sized manufacturing firms the opportunity to compare their practices against other firms of similar size.

Relevance:

60.00%

Publisher:

Abstract:

A major inhibitor to e-commerce stems from the reluctance consumers have to complete transactions because of concern over the use of private information divulged in online transaction processing. Because e-commerce occurs in a global environment, cultural factors are likely to have a significant impact on this concern. Building on work done in the areas of culture and privacy, and of trust and privacy, we explore the three-way relationship between culture, privacy and trust. Better, more appropriate, and contemporary measures of culture have recently been espoused, and a better understanding and articulation of internet users' information privacy concerns has been developed. We present the results of an exploratory study that builds on the work of Milberg, Gefen, and Bellman to better understand and test the effect that national culture has on trust and internet privacy.

Relevance:

60.00%

Publisher:

Abstract:

Information systems have developed to the stage that there is plenty of data available in most organisations, but there are still major problems in turning that data into information for management decision making. This thesis argues that the link between decision support information and transaction processing data should be through a common object model which reflects the real world of the organisation and encompasses the artefacts of the information system. The CORD (Collections, Objects, Roles and Domains) model is developed, which is richer in appropriate modelling abstractions than current object models. A flexible Object Prototyping tool based on a Semantic Data Storage Manager has been developed which enables a variety of models to be stored and experimented with. A statistical summary table model, COST (Collections of Objects Statistical Table), has been developed within CORD and is shown to be adequate to meet the modelling needs of Decision Support and Executive Information Systems. The COST model is supported by a statistical table creator and editor, COSTed, which is also built on top of the Object Prototyper and uses the CORD model to manage its metadata.
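The CORD and COST models themselves are not specified in the abstract; the short sketch below only illustrates the general role a statistical summary table plays in this setting, namely deriving decision-support aggregates from a collection of transaction-level objects. The object fields and the aggregation are invented for illustration.

```python
from collections import defaultdict

def summary_table(objects, row_attr, col_attr, measure):
    """Build a simple statistical summary table: sum of `measure`, grouped by two attributes."""
    table = defaultdict(float)
    for obj in objects:
        table[(obj[row_attr], obj[col_attr])] += obj[measure]
    return dict(table)

if __name__ == "__main__":
    # Transaction-level data, as it might come from an operational system.
    sales = [
        {"region": "North", "quarter": "Q1", "amount": 120.0},
        {"region": "North", "quarter": "Q2", "amount": 80.0},
        {"region": "South", "quarter": "Q1", "amount": 200.0},
    ]
    for (region, quarter), total in summary_table(sales, "region", "quarter", "amount").items():
        print(region, quarter, total)
```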

Relevance:

60.00%

Publisher:

Abstract:

The proliferation of data throughout the strategic, tactical and operational areas of many organisations has created a need for the decision maker to be presented with structured information that is appropriate for the task at hand. However, despite this abundance of data, managers at all levels of the organisation commonly encounter a condition of 'information overload' that results in a paucity of the correct information. Specifically, this thesis focuses on the tactical domain within the organisation and the information needs of the managers who reside at this level. In doing so, it argues that the link between decision making at the tactical level and low-level transaction processing data should be through a common object model that uses a framework based upon knowledge leveraged from co-ordination theory. To achieve this, the Co-ordinated Business Object Model (CBOM) was created. Detailing a two-tier framework, the first tier models data using four interacting object models, namely processes, activities, resources and actors. The second tier analyses the data captured by the four object models and returns information that can be used to support tactical decision making. In addition, the Co-ordinated Business Object Support System (CBOSS) is a prototype tool developed both to support the CBOM implementation and to demonstrate the functionality of the CBOM as a modelling approach for supporting tactical management decision making. Containing a graphical user interface, the system allows the user to create and explore alternative implementations of an identified tactical-level process. In order to validate the CBOM, three verification tests were completed. The results provide evidence that the CBOM framework helps bridge the gap between low-level transaction data and the information that is used to support tactical-level decision making.
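The CBOM's actual object definitions are not given in the abstract; the sketch below is only a rough illustration of the two-tier idea it describes: a first tier that records processes, activities, resources and actors, and a second tier that analyses those objects to return information for tactical decisions. The names, fields and the particular analysis are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    actor: str         # who performs the activity
    resource: str      # what the activity uses
    duration_h: float  # recorded from transaction-level data

@dataclass
class Process:
    name: str
    activities: list = field(default_factory=list)

def analyse(process):
    """Second tier: turn the recorded objects into decision-support information."""
    total = sum(a.duration_h for a in process.activities)
    by_actor = {}
    for a in process.activities:
        by_actor[a.actor] = by_actor.get(a.actor, 0.0) + a.duration_h
    return {"process": process.name, "total_hours": total, "hours_by_actor": by_actor}

if __name__ == "__main__":
    p = Process("order fulfilment", [
        Activity("pick goods", "warehouse", "stock", 1.5),
        Activity("pack goods", "warehouse", "packaging", 0.5),
        Activity("invoice", "finance", "ERP", 0.25),
    ])
    print(analyse(p))
```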

Relevance:

60.00%

Publisher:

Abstract:

This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architectures: pull architectures and push architectures. There is exiguous data on the performance of CDC architectures in a real-time environment, yet such data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesise the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition of capture latency, and of how to measure it, does not exist in the field. We create this definition and extend the TPC-C benchmark to make the capture latency measurement. The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
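The thesis extends TPC-C to measure capture latency; that extension is not reproduced here. The sketch below only illustrates the definition given in the abstract: capture latency is the time between a change being applied to the OLTP database and the CDC mechanism capturing it. The queue-based push shape and all names are assumptions made for the example.

```python
import time
from queue import Queue

changes = Queue()  # stand-in for the channel a push CDC mechanism listens on

def apply_change(row_id, value):
    """Apply a change to the OLTP database and publish it for CDC (push style)."""
    applied_at = time.perf_counter()
    # ... the write to the OLTP database would happen here ...
    changes.put({"row": row_id, "value": value, "applied_at": applied_at})

def capture_once():
    """Capture one change and compute its capture latency."""
    change = changes.get()
    captured_at = time.perf_counter()
    latency = captured_at - change["applied_at"]
    return change["row"], latency

if __name__ == "__main__":
    apply_change("order-42", {"status": "shipped"})
    row, latency = capture_once()
    print(f"captured change to {row}, capture latency {latency * 1e6:.1f} microseconds")
```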