809 results for Planning of information systems
Abstract:
Drawing on the example of a recent study (Wang, 2010), this paper discusses the application of a sociocultural approach to information literacy research and curriculum design. First, it describes the foundation of this research approach in sociocultural theories, in particular Vygotsky's sociocultural theory. Then it presents key theoretical principles arising from the research and describes how the sociocultural approach enabled the establishment of collaborative partnerships between information professionals and academic and teaching support staff in a community of practice for information literacy integration.
Abstract:
Service-oriented Architectures (SOA) and Web services leverage the technical value of solutions in the areas of distributed systems and cross-enterprise integration. The emergence of Internet marketplaces for business services is driving the need to describe services not only at a technical level, but also from a business and operational perspective. While SOA and Web services reside in an IT layer, organizations owning Internet marketplaces need to advertise and trade business services, which reside in a business layer. As a result, the gap between business and IT needs to be closed. This paper presents USDL (Unified Service Description Language), a specification language to describe services from a business, operational and technical perspective. USDL plays a major role in the Internet of Services to describe tradable services which are advertised in electronic marketplaces. The language has been tested using two service marketplaces as use cases.
Abstract:
The convergence of Internet marketplaces and service-oriented architectures has spurred the growth of Web service ecosystems. This paper articulates a vision for Web service ecosystems, discusses early manifestations of this vision, and presents a unifying architecture to support the emergence of larger and more sophisticated ecosystems.
Abstract:
This paper summarises some of the recent studies on various types of learning approaches that have utilised some form of Web 2.0 services in curriculum design to enhance learning. A generic implementation model of this integration is then presented to illustrate the overall learning implementation process. Recently, the integration of Web 2.0 technologies into the learning curriculum has begun to gain wide acceptance among teaching instructors across various higher learning institutions. This is evidenced by numerous studies that report the implementation of a range of Web 2.0 technologies in learning designs to improve learning delivery. Moreover, recent studies have also shown that current students embrace Web 2.0 technologies more readily than students using existing learning technology. Despite various attempts made by teachers at such integration, researchers have noted the lack of an integration standard to guide curriculum design. The absence of this standard restricts the adoption of Web 2.0 into learning and adds complexity to providing meaningful learning. Therefore, this paper attempts to draw a conceptual integration model that reflects how learning activities facilitated by Web 2.0 are currently being implemented. The design of this model is based on experiences shared by many scholars as well as feedback gathered from two separate surveys conducted with teachers and a group of 180 students. Furthermore, this paper also identifies some key components that are generally involved in the design of Web 2.0 teaching and learning and need to be addressed accordingly. Overall, the content of this paper is organised as follows. The first part introduces the importance of Web 2.0 implementation in teaching and learning from the perspective of higher education institutions and the challenges surrounding this area. The second part summarises related work in this field and brings forward the concept of designing learning with the incorporation of Web 2.0 technology. The next part presents the results of the analysis derived from the two surveys of students and teachers on using Web 2.0 during learning activities. The paper concludes by presenting a model that reflects several key entities that may be involved during the learning design.
Abstract:
Transcending traditional national borders, the Internet is an evolving technology that has opened up many new international market opportunities. However, ambiguity remains, with limited research and understanding of how the Internet influences the firm’s internationalisation process components. As a consequence, there has been a call for further investigation of the phenomenon. Thus, the purpose of this study was to investigate the Internet’s impact on the internationalisation process components, specifically, information availability, information usage, interactive communication and international market growth. Analysis was undertaken using structural equation modelling. Findings highlight the mediating impact of the Internet on information and knowledge transference in the internationalisation process. The study contributes by testing conceptualisations and statistically validating their interrelationships, while illuminating the Internet’s impact on firm internationalisation.
Abstract:
We address the problem of face recognition on video by employing the recently proposed probabilistic linear discriminant analysis (PLDA). The PLDA has been shown to be robust against pose and expression in image-based face recognition. In this research, the method is extended and applied to video where image set to image set matching is performed. We investigate two approaches of computing similarities between image sets using the PLDA: the closest pair approach and the holistic sets approach. To better model face appearances in video, we also propose the heteroscedastic version of the PLDA which learns the within-class covariance of each individual separately. Our experiments on the VidTIMIT and Honda datasets show that the combination of the heteroscedastic PLDA and the closest pair approach achieves the best performance.
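The two set-to-set matching strategies named in this abstract can be sketched as follows. This is an illustrative toy only: the paper's PLDA likelihood-ratio score is replaced here by a plain cosine similarity, and all function names are hypothetical, not taken from the authors' implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def closest_pair_score(set_a, set_b, score=cosine):
    """Closest pair approach: the best score over all cross-set pairs."""
    return max(score(u, v) for u in set_a for v in set_b)

def holistic_score(set_a, set_b, score=cosine):
    """Holistic sets approach (sketched here as comparing the set means)."""
    mean = lambda s: [sum(col) / len(s) for col in zip(*s)]
    return score(mean(set_a), mean(set_b))
```

The closest pair variant rewards a single well-matched frame pair, while the holistic variant summarises each image set before comparing, which is one way to see why the two can rank the same set pair differently.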
Abstract:
As organizations reach higher levels of business process management maturity, they often find themselves maintaining very large process model repositories, representing valuable knowledge about their operations. A common practice within these repositories is to create new process models, or extend existing ones, by copying and merging fragments from other models. We contend that if these duplicate fragments, a.k.a. exact clones, can be identified and factored out as shared subprocesses, the repository’s maintainability can be greatly improved. With this purpose in mind, we propose an indexing structure to support fast detection of clones in process model repositories. Moreover, we show how this index can be used to efficiently query a process model repository for fragments. This index, called RPSDAG, is based on a novel combination of a method for process model decomposition (namely the Refined Process Structure Tree), with established graph canonization and string matching techniques. We evaluated the RPSDAG with large process model repositories from industrial practice. The experiments show that a significant number of non-trivial clones can be efficiently found in such repositories, and that fragment queries can be handled efficiently.
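The core idea behind canonization-based exact-clone detection can be sketched in a few lines: reduce each fragment to a canonical code so that structurally identical fragments collide in a hash index. The RPSDAG combines this with the Refined Process Structure Tree decomposition; the toy index below (a sorted edge-list code over a flat fragment, with made-up names) only illustrates the hashing step.

```python
from collections import defaultdict

def canonical_code(edges):
    """Order-independent string code for a fragment given as (src, dst) edges."""
    return ";".join(sorted(f"{a}->{b}" for a, b in edges))

class CloneIndex:
    """Toy index mapping canonical codes to the models containing them."""
    def __init__(self):
        self._index = defaultdict(list)

    def add_fragment(self, model_id, edges):
        self._index[canonical_code(edges)].append(model_id)

    def clones(self):
        """Codes occurring in two or more places, i.e. exact clones."""
        return {code: ids for code, ids in self._index.items() if len(ids) >= 2}
```

Because the code is canonical, two fragments inserted with their edges in different orders still hash to the same bucket, which is what makes clone lookup a constant-time dictionary probe rather than repeated graph isomorphism checks.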
Abstract:
Evidence exists that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication may stem from the fact that the repository describes variants of the same processes and/or because of copy/pasting activity throughout the lifetime of the repository. Previous work has put forward techniques for identifying duplicate fragments (clones) that can be refactored into shared subprocesses. However, these techniques are limited to finding exact clones. This paper analyzes the problem of approximate clone detection and puts forward two techniques for detecting clusters of approximate clones. Experiments show that the proposed techniques are able to accurately retrieve clusters of approximate clones that originate from copy/pasting followed by independent modifications to the copied fragments.
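The clustering idea described here — grouping fragments whose pairwise distance falls under a threshold — can be sketched greedily. This is not the paper's algorithm: a real implementation would use a graph-edit distance between process fragments, whereas this sketch substitutes difflib's string similarity as a stand-in proxy.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Proxy similarity between two fragments serialised as strings."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_approximate_clones(fragments, threshold=0.8):
    """Greedy clustering: each fragment joins the first cluster whose
    representative (its first member) is similar enough, else starts
    a new cluster."""
    clusters = []
    for frag in fragments:
        for cluster in clusters:
            if similarity(cluster[0], frag) >= threshold:
                cluster.append(frag)
                break
        else:
            clusters.append([frag])
    return clusters
```

Copy/pasted fragments with small independent edits stay highly similar to the original, so they land in one cluster, while unrelated fragments fall below the threshold and form their own.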
Abstract:
In order to make good decisions about the design of information systems, an essential skill is to understand process models of the business domain the system is intended to support. Yet, little knowledge to date has been established about the factors that affect how model users comprehend the content of process models. In this study, we use theories of semiotics and cognitive load to theorize how model and personal factors influence how model viewers comprehend the syntactical information of process models. We then report on a four-part series of experiments, in which we examined these factors. Our results show that additional semantical information impedes syntax comprehension, and that theoretical knowledge eases syntax comprehension. Modeling experience further contributes positively to comprehension efficiency, measured as the ratio of correct answers to the time taken to provide answers. We discuss implications for practice and research.
Abstract:
Facial expression is one of the main issues of face recognition in uncontrolled environments. In this paper, we apply the probabilistic linear discriminant analysis (PLDA) method to recognize faces across expressions. Several PLDA approaches are tested and cross-evaluated on the Cohn-Kanade and JAFFE databases. With fewer samples per gallery subject, high recognition rates comparable to previous works have been achieved, indicating the robustness of the approaches. Among the approaches, the mixture of PLDAs has demonstrated better performance. The experimental results also indicate that facial regions around the cheeks, eyes, and eyebrows are more discriminative than regions around the mouth, jaw, chin, and nose.
Abstract:
Consider the concept combination ‘pet human’. In word association experiments, human subjects produce the associate ‘slave’ in relation to this combination. The striking aspect of this associate is that it is not produced as an associate of ‘pet’, or ‘human’ in isolation. In other words, the associate ‘slave’ seems to be emergent. Such emergent associations sometimes have a creative character and cognitive science is largely silent about how we produce them. Departing from a dimensional model of human conceptual space, this article will explore concept combinations, and will argue that emergent associations are a result of abductive reasoning within conceptual space, that is, below the symbolic level of cognition. A tensor-based approach is used to model concept combinations allowing such combinations to be formalized as interacting quantum systems. Free association norm data is used to motivate the underlying basis of the conceptual space. It is shown by analogy how some concept combinations may behave like quantum-entangled (non-separable) particles. Two methods of analysis were presented for empirically validating the presence of non-separable concept combinations in human cognition. One method is based on quantum theory and another based on comparing a joint (true theoretic) probability distribution with another distribution based on a separability assumption using a chi-square goodness-of-fit test. Although these methods were inconclusive in relation to an empirical study of bi-ambiguous concept combinations, avenues for further refinement of these methods are identified.
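The second analysis method in this abstract — comparing an observed joint distribution with the distribution expected under a separability assumption via a chi-square goodness-of-fit test — can be sketched for two binary interpretation variables. The counts below are made up for illustration; degrees of freedom and critical values would depend on the actual experimental design.

```python
def chi_square_separability(counts):
    """Chi-square statistic of observed joint counts against the
    separable (independence) model built from the marginals.

    counts[(i, j)]: observed count of outcome i for the first word
    and outcome j for the second word of the combination."""
    total = sum(counts.values())
    row = {i: sum(c for (a, _), c in counts.items() if a == i) for i, _ in counts}
    col = {j: sum(c for (_, b), c in counts.items() if b == j) for _, j in counts}
    chi2 = 0.0
    for (i, j), observed in counts.items():
        expected = row[i] * col[j] / total  # what separability predicts
        chi2 += (observed - expected) ** 2 / expected
    return chi2
```

A large statistic means the joint behaviour of the two words is poorly explained by their marginals alone, which is the signature of a non-separable combination.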
Abstract:
As computers approach the physical limits of information storable in memory, new methods will be needed to further improve information storage and retrieval. We propose a quantum inspired vector based approach, which offers a contextually dependent mapping from the subsymbolic to the symbolic representations of information. If implemented computationally, this approach would provide exceptionally high density of information storage, without the traditionally required physical increase in storage capacity. The approach is inspired by the structure of human memory and incorporates elements of Gardenfors’ Conceptual Space approach and Humphreys et al.’s matrix model of memory.
Abstract:
Having IT-related capabilities is not enough to secure value from the IT resources and survive in today’s competitive environment. IT resources evolve dynamically and organisations must sustain their existing capabilities to continue to leverage value from their IT resources. Organisations’ IT-related management capabilities are an important source of their competitive advantage. We suggest that organisations can sustain these capabilities through appropriate considerations of resources at the technology-use level. This study suggests that an appropriate organisational design relating to decision rights and work environment, and a congruent reward system can create a dynamic IT-usage environment. This environment will be a vital source of knowledge that could help organisations to sustain their IT-related management capabilities. Analysis of data collected from a field survey demonstrates that this dynamic IT-usage environment, a result of the synergy between complementary factors, helps organisations to sustain their IT-related management capabilities. This study adds an important dimension to understanding why some organisations continue to perform better with their IT resources than others. For practice, this study suggests that organisations need to consider a comprehensive approach to what constitutes their valuable resources.
Abstract:
At St Thomas' Hospital, we have developed a computer program on a Titan graphics supercomputer to plan the stereotactic implantation of iodine-125 seeds for the palliative treatment of recurrent malignant gliomas. Use of the Gill-Thomas-Cosman relocatable frame allows planning and surgery to be carried out at different hospitals on different days. Stereotactic computed tomography (CT) and positron emission tomography (PET) scans are performed and the images transferred to the planning computer. The head, tumour and frame fiducials are outlined on the relevant images, and a three-dimensional model generated. Structures which could interfere with the surgery or radiotherapy, such as major vessels, shunt tubing etc., can also be outlined and included in the display. Catheter target and entry points are set using a three-dimensional cursor controlled by a set of dials attached to the computer. The program calculates and displays the radiation dose distribution within the target volume for various catheter and seed arrangements. The CT co-ordinates of the fiducial rods are used to convert catheter co-ordinates from CT space to frame space and to calculate the catheter insertion angles and depths. The surgically implanted catheters are after-loaded the next day and the seeds left in place for between 4 and 6 days, giving a nominal dose of 50 Gy to the edge of the target volume. 25 patients have been treated so far.
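The coordinate step described above — converting catheter points from CT space to frame space before computing insertion depths — amounts to applying a rigid transform. The sketch below is purely illustrative: the 4x4 matrix is a made-up example, whereas in practice it would be solved from the CT coordinates of the frame's fiducial rods.

```python
import math

def apply_transform(matrix, point):
    """Apply a 4x4 homogeneous transform to a 3D point, returning (x, y, z)."""
    x, y, z = point
    h = (x, y, z, 1.0)
    return tuple(sum(matrix[r][c] * h[c] for c in range(4)) for r in range(3))

def insertion_depth(entry, target):
    """Straight-line distance from the catheter entry point to its target."""
    return math.dist(entry, target)
```

Once entry and target are both expressed in frame space, insertion angles follow from the direction vector between them in the same way the depth follows from its length.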
Abstract:
Exploring information use within everyday or community contexts is a recent area of interest for information literacy research endeavors. Within this domain, health information literacy (HIL) has emerged as a focus of interest due to identified synergies between information use and health status. However, while HIL has been acknowledged as a core ingredient that can assist people to take responsibility for managing and improving their own health, limited research has explored how HIL is experienced in everyday community life. This article will present the findings of ongoing research undertaken using phenomenography to explore how HIL is experienced among older Australians within everyday contexts. It will also discuss how these findings may be used to inform policy formulation in health communication and as an evidence base for the design and delivery of consumer health information resources and services.