816 results for Enterprise games
Abstract:
The aim of this article is to outline possible directions of research in discourse analysis and, more broadly, in pragmalinguistics. To this end, it identifies the most important communicative relations occurring in classic Role-Playing Games and points out the convergences between the relations among participants in the role-playing technique, used as a communicative technique for teaching foreign languages, and the assumptions of learner autonomisation in academic-level foreign language didactics, associated with the ideas of didactic cooperation and multi-party relations.
Abstract:
For Augustyn Surdyk, numerous assumptions of constructivism and constructionism in the educational context seem to correspond with the idea of autonomisation in foreign language didactics. He presents a comparison of selected aspects of the three theories in question using the example of an innovative communicative technique of Role-Playing Games applied in the process of teaching foreign languages at an advanced level. The conventions of the technique, with its simplified rules, have been borrowed from popular parlour games and adapted by the author to the conditions of language didactics. The elements of play and simulation incorporated in the technique allow it to be rated among the techniques of ludic strategy. (from the Preface to the book)
Abstract:
http://www.archive.org/details/divineenterprise00pieruoft
Abstract:
http://www.archive.org/details/upontheearththem013276mbp
Abstract:
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. 
In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
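The best-response convergence described in this abstract can be illustrated with a small sketch. The cost-sharing rule, capacities, and names below are illustrative assumptions, not the paper's formulation: players with fixed demands repeatedly move to the feasible bin that minimises their proportional share of a fixed bin cost, until no profitable deviation remains, i.e. a Nash Equilibrium of the simplified game.

```python
BIN_CAPACITY = 1.0   # capacity of each fixed-capacity resource (bin) -- assumed
BIN_COST = 1.0       # fixed cost of leasing one bin -- assumed

def best_response_dynamics(demands, max_rounds=100):
    """Iterate best responses until no player can lower its cost (a Nash Equilibrium)."""
    n = len(demands)
    assignment = list(range(n))        # start with each player alone in its own bin
    for _ in range(max_rounds):
        moved = False
        for p in range(n):
            def cost_in(b):
                # player p's proportional share of BIN_COST if it joins bin b
                load = demands[p] + sum(demands[q] for q in range(n)
                                        if assignment[q] == b and q != p)
                if load > BIN_CAPACITY:
                    return float("inf")          # infeasible: bin would overflow
                return BIN_COST * demands[p] / load
            candidates = set(assignment) | {max(assignment) + 1}  # existing bins or a fresh one
            best = min(candidates, key=cost_in)
            if cost_in(best) < cost_in(assignment[p]) - 1e-12:
                assignment[p] = best
                moved = True
        if not moved:
            break                      # no player wants to deviate
    return assignment
```

With demands summing to one bin's capacity, the dynamics collocate everyone into a single bin; demands that cannot share a bin stay apart. The paper's NP-hardness result concerns general variants; this sketch only exercises the simple proportional-cost case.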
The psychology of immersion and development of a quantitative measure of immersive response in games
Abstract:
This study sets out to investigate the psychology of immersion and the immersive response of individuals in relation to video and computer games. Initially, an exhaustive review of literature is presented, including research into games, player demographics, personality and identity. Play in traditional psychology is also reviewed, as well as previous research into immersion and attempts to define and measure this construct. An online qualitative study was carried out (N=38), and data was analysed using content analysis. A definition of immersion emerged, as well as a classification of two separate types of immersion, namely, vicarious immersion and visceral immersion. A survey study (N=217) verified the discrete nature of these categories and rejected the null hypothesis that there was no difference between individuals' interpretations of vicarious and visceral immersion. The primary aim of this research was to create a quantitative instrument which measures the immersive response as experienced by the player in a single game session. The IMX Questionnaire was developed using data from the initial qualitative study and quantitative survey. Exploratory Factor Analysis was carried out on data from 300 participants for the IMX Version 1, and Confirmatory Factor Analysis was conducted on data from 380 participants on the IMX Version 2. IMX Version 3 was developed from the results of these analyses. This questionnaire was found to have high internal consistency reliability and validity.
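The internal-consistency reliability reported at the end is conventionally assessed with Cronbach's alpha; the abstract does not name the statistic, so treat this as an illustrative assumption. A minimal sketch of the computation over per-item score columns:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per questionnaire item)."""
    k = len(items)                    # number of items
    n = len(items[0])                 # number of respondents
    def var(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / var(totals))
```

Alpha approaches 1 when items covary strongly (respondents answer consistently across items) and falls toward 0 as items become unrelated.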
Abstract:
The desire to obtain competitive advantage is a motivator for implementing Enterprise Resource Planning (ERP) Systems (Adam & O’Doherty, 2000). However, while it is accepted that Information Technology (IT) in general may contribute to the improvement of organisational performance (Melville, Kraemer, & Gurbaxani, 2004), the nature and extent of that contribution is poorly understood (Jacobs & Bendoly, 2003; Ravichandran & Lertwongsatien, 2005). Accordingly, Henderson and Venkatraman (1993) assert that it is the application of business and IT capabilities to develop and leverage a firm’s IT resources for organisational transformation, rather than the acquired technological functionality, that secures competitive advantage for firms. Application of the Resource Based View of the firm (Wernerfelt, 1984) and Dynamic Capabilities Theory (DCT) (Teece and Pisano (1998) in particular) may yield insights into whether or not the use of Enterprise Systems enhances organisations’ core capabilities and thereby obtains competitive advantage, sustainable or otherwise (Melville et al., 2004). An operational definition of Core Capabilities that is independent of the construct of Sustained Competitive Advantage is formulated. This Study proposes and utilises an applied Dynamic Capabilities framework to facilitate the investigation of the role of Enterprise Systems. The objective of this research study is to investigate the role of Enterprise Systems in the Core Dynamic Capabilities of Asset Lifecycle Management. The Study explores the activities of Asset Lifecycle Management, the Core Dynamic Capabilities inherent in Asset Lifecycle Management and the footprint of Enterprise Systems on those Dynamic Capabilities. Additionally, the study explains the mechanisms by which Enterprise Systems sustain the Exploitability and the Renewability of those Core Dynamic Capabilities. 
The study finds that Enterprise Systems contribute directly to the Value, Exploitability and Renewability of Core Dynamic Capabilities and indirectly to their Inimitability and Non-substitutability. The study concludes by presenting an applied Dynamic Capabilities framework, which integrates Alter's (1992) definition of Information Systems with Teece and Pisano's (1998) model of Dynamic Capabilities to provide a robust diagnostic for determining the sustained value-generating contributions of Enterprise Systems. These frameworks are used in the conclusions to frame the findings of the study. The conclusions go on to assert that these frameworks are free-standing and analytically generalisable, per Siggelkow (2007) and Yin (2003).
Abstract:
Our research follows a design science approach to develop a method that supports the initialization of ES implementation projects – the chartering phase. This project phase is highly relevant for implementation success, but is understudied in IS research. In this paper, we derive design principles for a chartering method based on a systematic review of ES implementation literature and semi-structured expert interviews. Our analysis identifies differences in the importance of certain success factors depending on the system type. The proposed design principles are built on these factors and are linked to chartering key activities. We specifically consider system-type-specific chartering aspects for process-centric Business Intelligence & Analytics (BI&A) systems, which are an emerging class of systems at the intersection of BI&A and business process management. In summary, this paper proposes design principles for a chartering method – considering specifics of process-centric BI&A.
Abstract:
We use information from the television game show with the highest guaranteed average payoff in the United States, Hoosier Millionaire, to analyze risk-taking in a high-stakes experiment. We characterize gambling decisions under alternative assumptions about contestant behavior and preferences, and derive testable restrictions on individual risk attitudes based on this characterization. We then use an extensive sample of gambling decisions to estimate distributions of risk-aversion parameters consistent with the theoretical restrictions and revealed preferences. We find that although most contestants display risk-averse preferences, the extent of the risk aversion implied by our estimates varies substantially with the stakes involved in the different decisions.
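One standard way to formalise such revealed-preference restrictions is to ask which risk-aversion coefficients are consistent with accepting or rejecting a given gamble. The CRRA utility form and the numbers below are illustrative assumptions; the paper's exact specification is not reproduced here.

```python
import math

def crra_utility(w, gamma):
    """CRRA utility of wealth w; gamma is the coefficient of relative risk aversion."""
    if abs(gamma - 1.0) < 1e-12:
        return math.log(w)            # limiting case gamma = 1
    return (w ** (1 - gamma) - 1) / (1 - gamma)

def accepts_gamble(wealth, gain, loss, p_win, gamma):
    """True if the expected CRRA utility of gambling exceeds standing pat."""
    eu = p_win * crra_utility(wealth + gain, gamma) \
         + (1 - p_win) * crra_utility(wealth - loss, gamma)
    return eu > crra_utility(wealth, gamma)
```

A contestant who accepts a gamble at gamma = 0 (risk neutrality) but would reject it at gamma = 5 brackets an implied risk-aversion interval; estimating the distribution of such intervals across many observed decisions is the spirit of the paper's exercise.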
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
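Genetic search over dispatch sequences can be caricatured in a few lines. Everything below is a hypothetical sketch: the thesis's IGA, its incremental population seeding, and its multi-resource model are not reproduced; this version only minimises makespan on identical parallel resources with a greedy least-loaded dispatch rule.

```python
import random

def makespan(sequence, durations, n_machines):
    """Greedy dispatch: each order in the sequence goes to the least-loaded machine."""
    loads = [0.0] * n_machines
    for order in sequence:
        i = loads.index(min(loads))
        loads[i] += durations[order]
    return max(loads)

def ga_schedule(durations, n_machines, pop_size=30, generations=200, seed=None):
    """Genetic search over order-dispatch sequences minimising makespan."""
    rng = random.Random(seed)
    n = len(durations)
    def random_seq():
        s = list(range(n)); rng.shuffle(s); return s
    pop = [random_seq() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: makespan(s, durations, n_machines))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point order crossover
            child = a[:cut] + [o for o in b if o not in a[:cut]]
            i, j = rng.sample(range(n), 2)        # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: makespan(s, durations, n_machines))
```

An incremental variant would seed the initial population with the previous schedule whenever new orders arrive, rather than restarting from random sequences, which is what makes such an approach fast enough for real-time use.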
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, it also performs a probabilistic estimation of the predicted status. An order generally consists of multiple series and parallel processes. We next introduce an order-fulfillment prediction model that combines advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce enterprise late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
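The component-wise strategy of decomposing, predicting each component, and aggregating can be sketched in a deliberately simplified additive form. The thesis's hierarchical periodic decomposition, correlation analysis, and multivariate models are not reproduced; the function names here are my own.

```python
def decompose_additive(series, period):
    """Split a series into a per-phase seasonal profile and a deseasonalised level."""
    mean = sum(series) / len(series)
    seasonal = []
    for phase in range(period):
        vals = series[phase::period]                     # all points at this phase
        seasonal.append(sum(vals) / len(vals) - mean)
    deseason = [x - seasonal[t % period] for t, x in enumerate(series)]
    return seasonal, deseason

def forecast(series, period, steps):
    """Predict each component separately, then aggregate the predictions."""
    seasonal, deseason = decompose_additive(series, period)
    level = sum(deseason[-period:]) / period             # naive level forecast
    n = len(series)
    return [level + seasonal[(n + h) % period] for h in range(steps)]
```

On a perfectly periodic series the seasonal component absorbs the oscillation and the level forecast carries the trend-free remainder, which is the intuition behind gains in mid-term accuracy.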
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, automate more of their procedures, and make effective, data-driven decisions.
Abstract:
This paper presents a new partial two-player game, called the cannibal animal game, which is a variant of Tic-Tac-Toe. The game is played on the infinite grid, where in each round a player chooses and occupies free cells. The first player, Alice, can occupy one cell per turn and wins if she occupies a set of cells, the union of a subset of which is a translated, reflected and/or rotated copy of a previously agreed-upon polyomino P (called an animal). The objective of the second player, Bob, is to prevent Alice from creating her animal by occupying, in each round, a translated, reflected and/or rotated copy of P. An animal is a cannibal if Bob has a winning strategy, and a non-cannibal otherwise. This paper presents some new tools, such as the bounding strategy and the punching lemma, to classify animals into cannibals or non-cannibals. We also show that the pairing strategy works for this problem.
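The translated, reflected, and rotated copies of P that both players manipulate can be enumerated mechanically. A minimal sketch, representing an animal as a set of grid cells (function names are my own, not the paper's):

```python
def normalise(cells):
    """Translate a set of grid cells so its minimum coordinates are (0, 0)."""
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def symmetries(cells):
    """All distinct reflected/rotated copies of an animal, up to translation."""
    shapes = set()
    shape = set(cells)
    for _ in range(4):
        shape = {(y, -x) for x, y in shape}                  # rotate 90 degrees
        shapes.add(normalise(shape))
        shapes.add(normalise({(-x, y) for x, y in shape}))   # reflect
    return shapes
```

An animal with no symmetry has eight distinct orientations; symmetric animals have fewer, e.g. the L-tromino's diagonal symmetry leaves four.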
Abstract:
info:eu-repo/semantics/submittedForPublication
Abstract:
In this paper, we have considered the problem of selection of available repertoires. With Ab2 as immunogens, we have used the idiotypic cascade to explore potential repertoires. Our results suggest that potential idiotypic repertoires are more or less the same within a species or between different species. A given idiotype "à la Oudin" can become a recurrent one within the same outbred species or within different species. Similarly, an intrastrain crossreactive idiotype can be induced in other strains, even though there is a genetic disparity between these strains. The structural basis of this phenomenon has been explored. We next examined results showing the loss and gain of recurrent idiotypes without any intentional idiotypic manipulation. A recurrent idiotype can be lost in a syngeneic transfer and a private one can become recurrent by changing the genetic background. The change of available idiotypic repertoires at the B cell level has profound influences on the idiotypic repertoires of suppressor T cells. All these results imply that idiotypic games are played by the immune system itself, a strong suggestion that the immune system is a functional idiotypic network.
Abstract:
Tony Mann provides a review of the book: Theory of Games and Economic Behavior, John von Neumann and Oskar Morgenstern, Princeton University Press, 1944.