961 results for Compensating Transactions
Abstract:
Based on the modified dual-core structure, three kinds of special photonic crystal fibers are presented, which are extremely large negative dispersion, super-broad band, and large area mode field dispersion-compensating photonic crystal fibers (DCPCF). For the extremely large negative dispersion DCPCF, the peak negative dispersion reaches -5.9 × 10^4 ps/(nm km). The super-broad band DCPCF has broadband large negative dispersion, and the dispersion value varies linearly from -380 ps/(nm km) to -420 ps/(nm km) in the C band. The designed large area mode field DCPCF has a peak dispersion of -1203 ps/(nm km) with an inner core mode area of 47 μm² and an outer core mode area of 835 μm². Furthermore, for the large area mode field DCPCF, an experimental result is also obtained. (C) 2008 Wiley Periodicals, Inc.
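As a rough, hedged illustration of what such a large negative dispersion implies for compensation lengths (the transmission-fiber values below are assumed textbook figures for standard single-mode fiber at 1550 nm, not taken from the abstract), the required length of compensating fiber follows from cancelling the accumulated dispersion of the link:

```latex
% Dispersion compensation condition: the accumulated dispersion of the
% transmission span plus that of the compensating fiber sums to ~zero.
% D_SMF = 17 ps/(nm km) and L_SMF = 100 km are assumed illustrative values;
% D_DCF = -5.9e4 ps/(nm km) is the peak value quoted in the abstract.
\[
  D_{\mathrm{SMF}} L_{\mathrm{SMF}} + D_{\mathrm{DCF}} L_{\mathrm{DCF}} \approx 0
  \;\Longrightarrow\;
  L_{\mathrm{DCF}} = -\frac{D_{\mathrm{SMF}} L_{\mathrm{SMF}}}{D_{\mathrm{DCF}}}
  \approx -\frac{(17)(100)}{-5.9\times 10^{4}}\ \mathrm{km}
  \approx 0.029\ \mathrm{km}.
\]
```

Under these assumed numbers, roughly 30 m of such fiber would compensate a 100 km span.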
Realization of highly uniform self-assembled InAs quantum wires by the strain compensating technique
Abstract:
Self-assembled InAs quantum wires (QWRs) on an InP(001) substrate have been grown by molecular-beam epitaxy using a strain-compensating technique. Atomic force microscopy, transmission electron microscopy, and high-resolution x-ray diffraction are used to characterize their structural properties. We propose that, by carefully adjusting the composition of the InAlGaAs buffer layer and the strain-compensating spacer layers, stacked QWRs with high uniformity can be achieved. In addition, the formation mechanism and vertical anti-correlation of the QWRs are also discussed. (c) 2005 American Institute of Physics.
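As a hedged sketch of the strain-compensating idea behind such stacked structures (generic notation, not the paper's own symbols or values): the tensile-strained spacer is chosen so that the thickness-weighted average strain of one wire/spacer period vanishes, preventing strain from accumulating through the stack.

```latex
% Zero-net-strain condition for one period of the stack (generic notation,
% assumed for illustration): eps_w, t_w are the misfit strain and thickness of
% the compressively strained InAs QWR layer; eps_s, t_s those of the spacer.
\[
  \bar{\varepsilon}
  = \frac{\varepsilon_{w} t_{w} + \varepsilon_{s} t_{s}}{t_{w} + t_{s}} = 0
  \quad\Longrightarrow\quad
  t_{s} = -\frac{\varepsilon_{w}}{\varepsilon_{s}}\, t_{w}.
\]
```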
Abstract:
Web services can be seen as a newly emerging research area for Service-oriented Computing and their implementation in Service-oriented Architectures. Web services are self-contained, self-describing modular applications or components providing services. Web services may be dynamically aggregated, composed, and enacted as Web services workflows. This requires frameworks and interaction protocols for their co-ordination and transaction support. In a Service-oriented Computing setting, transactions are more complex, involve multiple parties (roles), span many organizations, and may be long-running, consisting of highly decentralized service partners and performed by autonomous entities. A Service-oriented Transaction Model has to provide comprehensive support for long-running propositions including negotiations, conversations, commitments, contracts, tracking, payments, and exception handling. Current transaction models and mechanisms, including their protocols and primitives, do not sufficiently cater for quality-aware and long-running transactions comprising loosely-coupled (federated) service partners and resources. Web services transactions require the co-ordination behavior provided by a traditional transaction mechanism to control the operations and outcome of an application. Furthermore, Web services transactions require the capability to handle the co-ordination of processing outcomes or results from multiple services in a more flexible manner. This requires more relaxed forms of transactions (ones that do not strictly have to abide by the ACID properties), such as loosely-coupled collaboration and workflows. In addition, there is a need to group Web services into applications that require some form of correlation, but do not necessarily require transactional behavior. The purpose of this paper is to provide a state-of-the-art review and overview of some proposed standards surrounding Web services composition, co-ordination, and transaction. In particular, the Business Process Execution Language for Web Services (BPEL4WS) and its co-ordination and transaction frameworks (WS-Coordination and WS-Transaction) are discussed.
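To make the idea of a relaxed, non-ACID transaction concrete, here is a minimal saga-style sketch in Python. It is not BPEL4WS or WS-Transaction code, and the step and compensator names are hypothetical; it only illustrates the pattern in which each completed step registers a compensating action, and a failure triggers those compensators in reverse order instead of a strict rollback.

```python
# Minimal compensating-transaction (saga-style) sketch; illustrative only,
# not a WS-Transaction API. Steps and their compensators are hypothetical.

class CompensatingTransaction:
    def __init__(self):
        self._compensators = []  # undo actions registered by completed steps

    def run_step(self, action, compensator):
        """Execute a step; remember how to compensate it if a later step fails."""
        result = action()
        self._compensators.append(compensator)
        return result

    def execute(self, steps):
        """Run (action, compensator) pairs; on failure, compensate in reverse order."""
        try:
            for action, compensator in steps:
                self.run_step(action, compensator)
            return True   # all steps completed
        except Exception:
            for compensator in reversed(self._compensators):
                compensator()  # semantic undo, not a database rollback
            return False


if __name__ == "__main__":
    log = []

    def fail_ship():
        raise RuntimeError("ship failed")  # simulate a failing third step

    steps = [
        (lambda: log.append("reserve"), lambda: log.append("cancel reservation")),
        (lambda: log.append("charge"), lambda: log.append("refund")),
        (fail_ship, lambda: None),
    ]
    ok = CompensatingTransaction().execute(steps)
    print(ok, log)  # False ['reserve', 'charge', 'refund', 'cancel reservation']
```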
Abstract:
We propose and evaluate admission control mechanisms for ACCORD, an Admission Control and Capacity Overload management Real-time Database framework (an architecture and a transaction model) for hard-deadline RTDB systems. The system architecture consists of admission control and scheduling components that provide early notification of failure to submitted transactions that are deemed not valuable or incapable of completing on time. In particular, our Concurrency Admission Control Manager (CACM) ensures that admitted transactions do not overburden the system by requiring a level of concurrency that is not sustainable. The transaction model consists of two components: a primary task and a compensating task. The execution requirements of the primary task are not known a priori, whereas those of the compensating task are known a priori. Upon the submission of a transaction, the admission control mechanisms are employed to decide whether to admit or reject that transaction. Once admitted, a transaction is guaranteed to finish executing before its deadline. A transaction is considered to have finished executing if exactly one of two things occurs: either its primary task is completed (successful commitment), or its compensating task is completed (safe termination). Committed transactions bring a profit to the system, whereas a terminated transaction brings no profit. The goal of the admission control and scheduling protocols (e.g., concurrency control, I/O scheduling, memory management) employed in the system is to maximize system profit. In that respect, we describe a number of concurrency admission control strategies and contrast (through simulations) their relative performance.
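A hedged, highly simplified sketch of this kind of admission decision (the single-resource time-accounting policy below is an illustrative assumption, not ACCORD's actual CACM): since only the compensating task's cost is known a priori, a transaction is admitted only if every admitted transaction's compensating task can still complete by its deadline, so that safe termination remains guaranteed even if the primary task cannot finish.

```python
# Simplified admission-control sketch inspired by the primary/compensating
# transaction model above. The admission test is an assumed illustration,
# not the paper's CACM policy.

from dataclasses import dataclass

@dataclass
class Transaction:
    name: str
    compensating_cost: float  # known a priori, per the transaction model
    deadline: float           # relative deadline (time units from now)
    # The primary task's requirements are unknown a priori, so they do not
    # enter this admission test; the primary runs opportunistically.

class AdmissionController:
    def __init__(self):
        self.admitted = []

    def _reserved(self):
        # Time already reserved for guaranteed compensating tasks.
        return sum(t.compensating_cost for t in self.admitted)

    def admit(self, t: Transaction) -> bool:
        if self._reserved() + t.compensating_cost <= t.deadline:
            self.admitted.append(t)
            return True   # commit or safe termination guaranteed by the deadline
        return False      # early notification of failure at submission time

if __name__ == "__main__":
    ac = AdmissionController()
    print(ac.admit(Transaction("T1", compensating_cost=2, deadline=10)))  # True  (2 <= 10)
    print(ac.admit(Transaction("T2", compensating_cost=5, deadline=6)))   # False (2 + 5 = 7 > 6)
    print(ac.admit(Transaction("T3", compensating_cost=3, deadline=9)))   # True  (2 + 3 = 5 <= 9)
```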
Abstract:
This paper is about performance assessment in serious games. We conceive serious gaming as a process of player-led decision taking. Starting from combinatorics and item-response theory, we provide an analytical model that makes explicit to what extent observed player performances (decisions) are blurred by chance processes (guessing behaviors). We found large effects both theoretically and practically. In two existing serious games, random guess scores were found to explain up to 41% of total scores. Monte Carlo simulation of random game play confirmed the substantial impact of randomness on performance. For valid performance assessments, be it in-game or post-game, the effects of randomness should be included to produce re-calibrated scores that can reasonably be interpreted as the players' achievements.
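A hedged Monte Carlo sketch of this kind of analysis (the number of decisions, number of options, and scoring rule are assumed for illustration, not parameters of the games studied in the paper): simulate purely random play over a sequence of multiple-choice-style decisions and measure what fraction of the maximum score chance alone produces.

```python
# Monte Carlo sketch: expected score from purely random guessing on k-option
# decisions. Decision and option counts are illustrative assumptions.

import random

def random_play_score(n_decisions=20, n_options=4, rng=random):
    """Score of one fully random play-through: 1 point per correct decision."""
    return sum(1 for _ in range(n_decisions) if rng.randrange(n_options) == 0)

def simulate(n_runs=100_000, **kwargs):
    scores = [random_play_score(**kwargs) for _ in range(n_runs)]
    return sum(scores) / n_runs

if __name__ == "__main__":
    n_decisions, n_options = 20, 4
    mean = simulate(n_decisions=n_decisions, n_options=n_options)
    # Analytical expectation for comparison: n_decisions / n_options = 5.0,
    # i.e. 25% of the maximum score is explained by chance alone here.
    print(f"mean random score: {mean:.2f} / {n_decisions}")
```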
Abstract:
We consider homogeneous two-sided markets, in which connected buyer-seller pairs bargain and trade repeatedly. In this infinite market game with exogenous matching probabilities and a common discount factor, we prove the existence of equilibria in stationary strategies. The equilibrium payoffs are given implicitly as a solution to a system of linear equations. Then, we endogenize the matching mechanism in a link formation stage that precedes the market game. When agents are sufficiently patient and link costs are low, we provide an algorithm to construct minimally connected networks that are pairwise stable with respect to the expected payoffs in the trading stage. The constructed networks are essentially efficient and consist of components with a constant buyer-seller ratio. The latter ratio increases (decreases) for a buyer (seller) that deletes one of her links in a pairwise stable component.
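To illustrate what "equilibrium payoffs given implicitly as a solution to a system of linear equations" can look like, here is a stylized example in Python. It is emphatically not the paper's model: it assumes a single buyer-seller link matched with probability p each period, a unit trade surplus split by symmetric Nash bargaining over discounted continuation values, and a common discount factor delta, which yields a linear fixed point in the stationary payoffs.

```python
# Stylized stationary-payoff example (NOT the paper's system): one buyer-seller
# link, matching probability p, unit surplus split by symmetric Nash bargaining
# over continuation values, common discount factor delta.

import numpy as np

def stationary_payoffs(p=0.5, delta=0.9):
    # Value recursions (linear in v_b, v_s):
    #   v_b = delta*v_b + (p/2)*(1 - delta*v_b - delta*v_s)
    #   v_s = delta*v_s + (p/2)*(1 - delta*v_b - delta*v_s)
    A = np.array([
        [1 - delta + p * delta / 2, p * delta / 2],
        [p * delta / 2, 1 - delta + p * delta / 2],
    ])
    b = np.array([p / 2, p / 2])
    return np.linalg.solve(A, b)  # stationary (buyer, seller) payoffs

if __name__ == "__main__":
    v_b, v_s = stationary_payoffs()
    print(round(v_b, 4), round(v_s, 4))  # 0.4545 0.4545 by symmetry
```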