727 results for Electronic commerce - Customer services - Management
Abstract:
This study extends previous media equation research, which showed that flattery from a computer can produce the same general effects as flattery from humans. Specifically, the study explored the potential moderating effect of experience on the impact of flattery from a computer. One hundred and fifty-eight students from the University of Queensland voluntarily participated in the study. Participants interacted with a computer and were exposed to one of three kinds of feedback: praise (sincere praise), flattery (insincere praise), or control (generic feedback). Questionnaire measures assessing participants' affective state, attitudes, and opinions were taken. Participants with high experience, but not those with low experience, displayed a media equation pattern of results, reacting to flattery from a computer in a manner congruent with people's reactions to flattery from other humans. High-experience participants tended to believe that the computer spoke the truth, experienced more positive affect as a result of flattery, and judged the computer's performance more favourably. These findings are interpreted in light of previous research, and the implications for software design in fields such as entertainment and education are considered. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
This study takes a direct approach to determining management motivation for the use of financial derivatives. We survey a sample of Australian firms on attitudes to derivative use and financial risk management. Management views are sought on the importance of a series of theoretical reasons for using derivatives. Generally, we find that in using derivatives managers are focused on the broad reduction of risk and of the volatility of cash flows and earnings. Specific issues such as reducing bankruptcy costs, debt levels and taxation are not considered important. A further interesting result from this research is that even though firms may use derivatives, they do not necessarily hedge all of their annual exposures across different financial risks. This helps explain the inconsistency of results in many empirical studies on the determinants of derivative use.
Abstract:
Enterprise systems interoperability (ESI) is currently an important topic for business. This situation is evidenced, at least in part, by the number and extent of potential candidate protocols for such process interoperation, viz., ebXML, BPML, BPEL, and WSCI. Wide-ranging support for each of these candidate standards already exists. However, despite broad acceptance, a sound theoretical evaluation of these approaches has not yet been provided. We use the Bunge-Wand-Weber (BWW) models, in particular the representation model, to provide the basis for such a theoretical evaluation. We, and other researchers, have shown the usefulness of the representation model for analyzing, evaluating, and engineering techniques in the areas of traditional and structured systems analysis, object-oriented modeling, and process modeling. In this work, we address the question: what are the potential semantic weaknesses of using ebXML alone for process interoperation between enterprise systems? We find that users will lack important implementation information because of representational deficiencies; that ontological redundancy unnecessarily increases the complexity of the specification; and that, due to instances of ontological excess, users of the specification will have to bring in extra-model knowledge to understand its constructs.
Abstract:
This paper reports on a three-year qualitative study at a mental health services organization. The study focuses on differences between espoused theory and theory in use during the implementation of a new service delivery model. This major organizational change occurred in a national policy environment of major health budget cutbacks. Primarily as a result of the poor resourcing provided to bring about policy change, and of the poor implementation of a series of termination plans, a number of constraints to learning contributed to the difficulties in implementing the new service delivery model. The study explores what occurred during the change process. Rather than blaming participants in the change for the poor outcomes, the study is set in the broader context of that policy environment of major health cutbacks.
Abstract:
Case studies of knowledge management practices are often conducted in organizations where the aim is to manage knowledge for future operational improvements. What about knowledge management for organizations with limited life-spans that are preparing for closure? Such organizations are not common but can benefit from knowledge management strategy. This case study concerns the knowledge management strategy of an organization that is preparing for its final phase of operations. We facilitated two group workshops with senior managers to scope a strategy, following which the organization initiated a set of projects to implement the resulting actions. This paper reviews their implemented actions against those designed in the workshop to shed light on knowledge management in this uncommon situation.
Abstract:
The present global economic crisis creates doubts about the good use of accumulated experience and knowledge in managing risk in financial services. Typically, risk management practice does not use knowledge management (KM) to improve and to develop new answers to the threats. A key reason is that it is not clear how to break down the "organizational silos" view of risk management (RM) that is commonly taken. As a result, there has been relatively little work on finding the relationships between RM and KM. We have been doing research for the last couple of years on the identification of relationships between these two disciplines. At ECKM 2007 we presented a general review of the literature(s) and some hypotheses for starting research on KM and its relationship to the perceived value of enterprise risk management. This article presents findings based on our preliminary analyses, concentrating on those factors affecting the perceived quality of risk knowledge sharing. These come from a questionnaire survey of RM employees in organisations in the financial services sector, which yielded 121 responses. We included five explanatory variables for the perceived quality of risk knowledge sharing: two relating to people (organizational capacity for work coordination and perceived quality of communication among groups), one relating to process (perceived quality of risk control) and two relating to technology (web channel functionality and RM information system functionality). Our findings so far are that four of these five variables have a significant positive association with the perceived quality of risk knowledge sharing; contrary to expectations, web channel functionality did not have a significant association. Indeed, in some of our exploratory regression studies its coefficient (although not significant) was negative.
In stepwise regression, the variable organizational capacity for work coordination accounted for by far the largest part of the variation in the dependent variable perceived quality of risk knowledge sharing. The “people” variables thus appear to have the greatest influence on the perceived quality of risk knowledge sharing, even in a sector that relies heavily on technology and on quantitative approaches to decision making. We have also found similar results with the dependent variable perceived value of Enterprise Risk Management (ERM) implementation.
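The kind of analysis described above can be illustrated with a minimal sketch: ranking the five explanatory variables by the strength of their association with the dependent variable. The data below are fabricated for illustration (Likert-style responses, with the dependent variable constructed to echo the paper's finding that work coordination dominates); only the variable names and the sample size of 121 come from the abstract.

```python
# Illustrative sketch, not the authors' analysis: rank fabricated explanatory
# variables by Pearson correlation with a fabricated dependent variable.
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n = 121  # number of survey responses reported in the abstract
# Fabricated Likert-style responses (1-5) for each explanatory variable.
predictors = {
    "work_coordination": [random.randint(1, 5) for _ in range(n)],
    "communication_quality": [random.randint(1, 5) for _ in range(n)],
    "risk_control_quality": [random.randint(1, 5) for _ in range(n)],
    "web_channel_functionality": [random.randint(1, 5) for _ in range(n)],
    "rm_system_functionality": [random.randint(1, 5) for _ in range(n)],
}
# Construct the dependent variable so that it depends mostly on work
# coordination, mirroring the paper's headline finding, plus noise.
knowledge_sharing = [
    0.8 * predictors["work_coordination"][i]
    + 0.2 * predictors["communication_quality"][i]
    + random.gauss(0, 0.5)
    for i in range(n)
]

ranked = sorted(predictors.items(),
                key=lambda kv: abs(pearson(kv[1], knowledge_sharing)),
                reverse=True)
for name, values in ranked:
    print(f"{name}: r = {pearson(values, knowledge_sharing):+.2f}")
```

A full replication would of course use multiple (stepwise) regression rather than pairwise correlations; this sketch only shows the shape of the variable-ranking step.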
Abstract:
Knowledge management needs to consider the three related elements of people, processes and technology. Much existing work has concentrated on either people or technology, often to the exclusion of the other two elements. Yet without thinking about process (the way people, organisations and even technology actually do things), any implementation of a knowledge management initiative is at best risky, and at worst doomed to failure. This paper looks at various ways in which a process view has appeared, explicitly or implicitly, in knowledge management research and practice so far, and reflects on how more 'thinking about process' might improve knowledge management in the future. Consistent with this overall viewpoint, the issues generally centre less on what a process view would suggest should be done than on the way it would be implemented in practice.
Abstract:
We argue that, for certain constrained domains, elaborate model transformation technologies, implemented from scratch in general-purpose programming languages, are unnecessary for model-driven engineering; instead, lightweight configuration of commercial off-the-shelf productivity tools suffices. In particular, in the CancerGrid project, we have been developing model-driven techniques for the generation of software tools to support clinical trials. A domain metamodel captures the community's best practice in trial design. A scientist authors a trial protocol, modelling their trial by instantiating the metamodel; customized software artifacts to support trial execution are generated automatically from the scientist's model. The metamodel is expressed as an XML Schema, in such a way that it can be instantiated by completing a form to generate a conformant XML document. The same process works at a second level for trial execution: among the artifacts generated from the protocol are models of the data to be collected, and the clinician conducting the trial instantiates such models in reporting observations, again by completing a form to create a conformant XML document, representing the data gathered during that observation. Simple standard form management tools are all that is needed. Our approach is applicable to a wide variety of information-modelling domains: not just clinical trials, but also electronic public sector computing, customer relationship management, document workflow, and so on. © 2012 Springer-Verlag.
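The form-to-XML step described above can be sketched in a few lines: a completed "form" (here just a dict) instantiates a model, producing an XML document that would then be validated against the schema metamodel. The element and field names below are hypothetical illustrations, not taken from the CancerGrid artifacts.

```python
# Minimal sketch, assuming hypothetical element names: turn completed form
# fields into an XML document instantiating a data-collection model.
import xml.etree.ElementTree as ET

def form_to_xml(root_name, fields):
    """Build an XML document from form field names and values."""
    root = ET.Element(root_name)
    for name, value in fields.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

# The clinician "completes a form" reporting an observation:
observation = {
    "patientId": "P-0042",
    "visitDate": "2012-05-01",
    "tumourSizeMm": 14,
}
doc = form_to_xml("observation", observation)
print(doc)
# A real deployment would validate `doc` against the XML Schema metamodel.
```

The point of the sketch is the lightweight pipeline, form in, conformant document out, rather than any particular API.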
Abstract:
Within the Distributed eLearning Center (DeLC) project we are developing a system for distance and eLearning, which offers fixed and mobile access to electronic content and services. Mobile access is based on an InfoStation architecture, which provides Bluetooth and WiFi connectivity. On top of the InfoStation network we are developing multi-agent middleware that provides users with context-aware, adaptive and personalized access to the mobile services. To make testing and optimization of the middleware more convenient, a simulation environment called CA3 SiEnv is being created.
Abstract:
Computer software plays an important role in business, government, society and the sciences. To solve real-world problems, it is very important to measure quality and reliability throughout the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining and modifying software. The present paper gives an overview of the Data Mining (DM) techniques that can be applied to various types of SE data in order to solve the challenges posed by SE tasks such as programming, bug detection, debugging and maintenance. A specific DM software package is discussed, namely an analytical tool for analyzing data and summarizing the relationships that have been identified. The paper concludes that the proposed DM techniques within the domain of SE could be well applied in fields such as Customer Relationship Management (CRM), eCommerce and eGovernment. ACM Computing Classification System (1998): H.2.8.
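One of the SE tasks named above, bug detection and maintenance, admits a very small DM-style illustration: flagging likely duplicate bug reports by the Jaccard similarity of their word sets. The reports and threshold below are fabricated for illustration and are not from the paper under review.

```python
# Illustrative sketch of a DM technique applied to SE data: candidate
# duplicate-bug-report detection via Jaccard similarity of word sets.
def jaccard(a, b):
    """Jaccard similarity between the word sets of two report texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

reports = [
    "crash when saving a large file",
    "crash when saving file to disk",
    "login button does nothing on mobile",
]

# Compare every pair of reports and flag pairs above a similarity
# threshold as candidate duplicates for a maintainer to review.
THRESHOLD = 0.3
duplicates = [
    (i, j, round(jaccard(reports[i], reports[j]), 2))
    for i in range(len(reports))
    for j in range(i + 1, len(reports))
    if jaccard(reports[i], reports[j]) >= THRESHOLD
]
print(duplicates)
```

Real DM tools for this task use richer features (stemming, TF-IDF weighting, classifiers); the sketch only shows the pairwise-similarity idea.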