378 results for production planning information systems
Abstract:
This paper discusses the areawide Dynamic ROad traffic NoisE (DRONE) simulator, and its implementation as a tool for noise abatement policy evaluation. DRONE involves integrating a road traffic noise estimation model with a traffic simulator to estimate road traffic noise in urban networks. An integrated traffic simulation-noise estimation model provides an interface for direct input of traffic flow properties from simulation model to noise estimation model that in turn estimates the noise on a spatial and temporal scale. The output from DRONE is linked with a geographical information system for visual representation of noise levels in the form of noise contour maps.
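One detail worth noting about the noise-estimation step: contributions from multiple traffic sources combine on an energy basis, not arithmetically. The sketch below illustrates only this standard acoustics rule; it is not DRONE's actual estimation model, and the function name is illustrative:

```python
import math

def combined_level(levels_db):
    """Decibel levels add on an energy basis: convert each level to
    linear power, sum the powers, and convert back to decibels."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# Two equal 70 dB sources combine to roughly 73 dB, not 140 dB.
print(round(combined_level([70, 70]), 1))  # 73.0
```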
Abstract:
Separability is a concept that is very difficult to define, and yet much of our scientific method is implicitly based upon the assumption that systems can sensibly be reduced to a set of interacting components. This paper examines the notion of separability in the creation of bi-ambiguous compounds using an approach based upon the CHSH and CH inequalities. It reports results of an experiment showing that violations of the CHSH and CH inequalities can occur in human conceptual combination.
Abstract:
Information security policy defines the governance and implementation strategy for information security in alignment with the corporate risk policy objectives and strategies. Research has established that alignment between corporate concerns may be enhanced when strategies are developed concurrently using the same development process, as an integrative relationship is established. Utilizing the corporate risk management framework for security policy management establishes such an integrative relationship between information security and corporate risk management objectives and strategies. The current literature, however, lacks a definitive approach that fully integrates security policy management with the corporate risk management framework. This paper presents an approach that adopts a conventional corporate risk management framework for security policy development and management to achieve alignment with the corporate risk policy. A case example is examined to illustrate the alignment achieved at each process step, with a security policy structure consequently being derived in the process. It is shown that information security policy management outcomes become both integral drivers and major elements of corporate-level risk management considerations. Further study should assess the impact of using the proposed framework on enhancing alignment as conceived in this paper.
Abstract:
Real-world business processes are resource-intensive. In work environments human resources usually multitask, both human and non-human resources are typically shared between tasks, and multiple resources are sometimes necessary to undertake a single task. However, current Business Process Management Systems focus on task-resource allocation in terms of individual human resources only and lack support for a full spectrum of resource classes (e.g., human or non-human, application or non-application, individual or teamwork, schedulable or unschedulable) that could contribute to tasks within a business process. In this paper we develop a conceptual data model of resources that takes into account the various resource classes and their interactions. The resulting conceptual resource model is validated using a real-life healthcare scenario.
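The resource classes named above can be pictured as orthogonal attributes on a resource entity. The sketch below is a toy encoding of that idea; the class and field names are illustrative assumptions, not the paper's actual conceptual model:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    HUMAN = auto()
    NON_HUMAN = auto()

@dataclass
class Resource:
    """A resource characterised along some of the dimensions the
    abstract names: human vs non-human, schedulable vs unschedulable."""
    name: str
    kind: Kind
    schedulable: bool = True

@dataclass
class Task:
    """A task may require multiple resources, human and non-human."""
    name: str
    required: list

# A healthcare-flavoured example in the spirit of the validation scenario.
surgeon = Resource("surgeon", Kind.HUMAN)
theatre = Resource("operating theatre", Kind.NON_HUMAN)
op = Task("appendectomy", [surgeon, theatre])
print(len(op.required))  # 2
```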
Abstract:
Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account when a term might be explicitly negated. Medical data, by its nature, contains a high frequency of negated terms, e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this will likely be confined to domains such as medical data where negation is prevalent.
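To make the problem concrete, a crude trigger-and-window heuristic can flag terms that fall inside a negation's scope. This is a minimal sketch for illustration only (the trigger list and window size are assumptions, and the paper's actual negation handling is not specified here):

```python
import re

# Hypothetical trigger list; real clinical negation detection uses
# far richer trigger sets and scope rules.
NEGATION_TRIGGERS = {"no", "not", "denies", "without"}

def negated_terms(sentence, window=5):
    """Return tokens appearing within `window` tokens after a
    negation trigger; a crude approximation of negation scope."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    negated = set()
    for i, tok in enumerate(tokens):
        if tok in NEGATION_TRIGGERS:
            negated.update(tokens[i + 1:i + 1 + window])
    return negated

print(sorted(negated_terms(
    "review of systems showed no chest pain or shortness of breath")))
```

An IR model aware of negation could then discount, rather than count, the flagged terms when scoring the document.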
Informed learning: a pedagogical construct attending simultaneously to information use and learning.
Abstract:
The idea of informed learning, applicable in academic, workplace and community settings, has been derived largely from a program of phenomenographic research in the field of information literacy, which has illuminated the experience of using information to learn. Informed learning is about simultaneous attention to information use and learning, where both information and learning are considered to be relational; and is built upon a series of key concepts such as second-order perspective, simultaneity, awareness, and relationality. Informed learning also relies heavily on reflection as a strategy for bringing about learning. As a pedagogical construct, informed learning supports inclusive curriculum design and implementation. This paper reports aspects of the informed learning research agenda currently being pursued at the Queensland University of Technology (QUT). The first part elaborates the idea of informed learning, examines the key concepts underpinning this pedagogical construct, and explains its emergence from the research base of the QUT Information Studies research team. The second presents a case that demonstrates the ongoing development of informed learning theory and practice, through the development of inclusive informed learning for a culturally diverse higher education context.
Abstract:
In sustainable development projects, as in other types of projects, knowledge transfer is important for the organisations managing the project. Nevertheless, knowledge transfer among employees does not happen automatically, and the lack of social networks and the lack of trust among employees have been found to be the major barriers to effective knowledge transfer. Social network analysis has been recognised as a very important tool for improving knowledge transfer in the project environment, since the transfer of knowledge is more effective where it depends heavily on social networks and informal dialogue. According to social capital theory, social capital consists of two parts: the conduit network and the resource exchange network. This research studies the relationships among performance, the resource exchange network (such as the knowledge network) and the relationship network (such as the strong-ties network, energy network, and trust network) at the individual and project levels. The aim of this chapter is to present an approach to overcoming the lack of social networks and lack of trust to improve knowledge transfer within project-based organisations. This is to be done by identifying the optimum structure of relationship networks and knowledge networks within small and medium projects. The optimal structure of the relationship networks and knowledge networks is measured along two dimensions: intra-project and inter-project. This chapter also outlines an extensive literature review in the areas of social capital, knowledge management and project management, and presents the conceptual model of the research approach.
Abstract:
While critical success factors (CSFs) of enterprise system (ES) implementation are mature concepts and have received considerable attention for over a decade, researchers have very often focused on only a specific aspect of the implementation process or a specific CSF. As a result, there is (1) little documented research that encompasses all significant CSF considerations and (2) little empirical research into the important factors of successful ES implementation. This paper is part of a larger research effort that aims to contribute to understanding the phenomenon of ES CSFs, and reports on preliminary findings from a case study conducted at the Queensland University of Technology (QUT) in Australia. This paper reports on an empirically derived CSFs framework using a directed content analysis of 79 studies from top IS outlets, employing the characteristics of analytic theory, and drawing on six different projects implemented at QUT.
Abstract:
This article introduces a “pseudo-classical” notion of modelling non-separability. This form of non-separability can be viewed as lying between separability and quantum-like non-separability. Non-separability is formalized in terms of the non-factorizability of the underlying joint probability distribution. A decision criterion for determining the non-factorizability of the joint distribution is related to determining the rank of a matrix, alongside another approach based on the chi-square goodness-of-fit test. This pseudo-classical notion of non-separability is discussed in terms of quantum games and concept combinations in human cognition.
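The rank criterion can be made concrete: a joint distribution P(a, b) factorizes into a product of marginals exactly when its matrix has rank 1, i.e. every 2x2 minor vanishes. The sketch below checks this directly; it is a minimal illustration of the rank idea, not the article's full decision procedure (which also involves a chi-square test):

```python
def is_factorizable(p, tol=1e-9):
    """Return True when the joint distribution matrix p[i][j] has
    rank 1, i.e. p(a, b) = p_A(a) * p_B(b): every 2x2 minor is zero."""
    rows, cols = len(p), len(p[0])
    for i in range(rows):
        for k in range(i + 1, rows):
            for j in range(cols):
                for l in range(j + 1, cols):
                    if abs(p[i][j] * p[k][l] - p[i][l] * p[k][j]) > tol:
                        return False
    return True

# Separable: the outer product of marginals (0.6, 0.4) and (0.5, 0.5).
separable = [[0.30, 0.30], [0.20, 0.20]]
# Non-separable: all probability mass on the diagonal (perfect correlation).
entangled = [[0.5, 0.0], [0.0, 0.5]]
print(is_factorizable(separable), is_factorizable(entangled))  # True False
```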
Abstract:
Information overload and mismatch are two fundamental problems affecting the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address the problems of overload and mismatch, neither approach alone can provide a satisfactory solution. This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter large volumes of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. The experimental results based on the RCV1 corpus show that the proposed two-stage filtering model significantly outperforms both the term-based and pattern-based information filtering models.
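The two-stage shape of the pipeline, a cheap coarse filter followed by a richer ranker, can be sketched as below. All names, thresholds, and documents are hypothetical, and the toy functions stand in for the paper's far richer rough-analysis and pattern-taxonomy-mining models:

```python
def stage_one(docs, profile_terms, threshold=1):
    """Coarse overload stage: discard documents sharing fewer than
    `threshold` terms with the user profile (stand-in for rough analysis)."""
    return [d for d in docs
            if len(set(d.lower().split()) & profile_terms) >= threshold]

def stage_two(docs, patterns):
    """Mismatch stage: rank the survivors by how many multi-word
    patterns they contain (stand-in for pattern taxonomy mining)."""
    return sorted(docs,
                  key=lambda d: sum(p in d.lower() for p in patterns),
                  reverse=True)

docs = [
    "stock market rally lifts tech shares",
    "local bakery wins pie contest",
    "market analysts expect tech stock gains",
]
survivors = stage_one(docs, {"market", "stock", "tech"})
ranked = stage_two(survivors, ["tech stock", "stock market"])
print(ranked)
```

The design point is that the first stage only needs to be cheap and high-recall, since the costlier second stage sees a much smaller candidate set.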
Abstract:
The paper details the results of the first phase of ongoing research into the sociocultural factors that influence the supervision of higher degrees research (HDR) engineering students in the Faculty of Built Environment and Engineering (BEE) and Faculty of Science and Technology (FaST) at Queensland University of Technology. A quantitative analysis was performed on the results from an online survey that was administered to 179 engineering students. The study reveals that cultural barriers impact these students' progression and developing confidence in their research programs. We argue that in order to assist international and non-English speaking background (NESB) research students to triumph over such culturally embedded challenges in engineering research, it is important for supervisors to understand this cohort's unique pedagogical needs and develop intercultural sensitivity in their pedagogical practice in postgraduate research supervision. To facilitate this, the governing body (Office of Research) can play a vital role in not only creating the required support structures but also ensuring their uniform implementation across the board.
Abstract:
To date, much work has been done to examine the ways in which information literacy – a way of thinking about, existing alongside and working with information – functions in an academic setting. However, its role in the non-academic library professions has been largely ignored. Given that the public librarian is responsible for designing and delivering services and programmes aimed at supporting the information literacy needs of the community at large, there is great value to be had from examining the ways in which public libraries understand and experience IL. The research described in this paper investigates, through the use of phenomenography, the ways in which public librarians understand and experience the concept of information literacy.
Abstract:
In recent years several scientific Workflow Management Systems (WfMSs) have been developed with the aim to automate large scale scientific experiments. As yet, many offerings have been developed, but none of them has been promoted as an accepted standard. In this paper we propose a pattern-based evaluation of three among the most widely used scientific WfMSs: Kepler, Taverna and Triana. The aim is to compare them with traditional business WfMSs, emphasizing the strengths and deficiencies of both systems. Moreover, a set of new patterns is defined from the analysis of the three considered systems.
Abstract:
At QUT, research data refers to information that is generated or collected to be used as primary sources in the production of original research results, and which would be required to validate or replicate research findings (Callan, De Vine, & Baker, 2010). Making publicly funded research data discoverable by the broader research community and the public is a key aim of the Australian National Data Service (ANDS). Queensland University of Technology (QUT) has been innovating in this space by undertaking mutually dependent technical and content (metadata) focused projects funded by ANDS. Research Data Librarians identified and described datasets generated from Category 1 funded research at QUT, by interviewing researchers, collecting metadata and fashioning metadata records for upload to the Australian Research Data Commons (ARDC) and exposure through the Research Data Australia interface. In parallel to this project, a Research Data Management Service and Metadata Hub project were being undertaken by QUT High Performance Computing & Research Support specialists. These projects will collectively store and aggregate QUT's metadata and research data from multiple repositories and administration systems and contribute metadata directly by OAI-PMH compliant feed to RDA. The pioneering nature of the work has resulted in a collaborative project dynamic where good data management practices and the discoverability and sharing of research data were the shared drivers for all activity. Each project's development and progress was dependent on feedback from the other. The metadata structure evolved in tandem with the development of the repository, and the development of the repository interface responded to meet the needs of the data interview process.
The project environment was one of bottom-up collaborative approaches to process and system development which matched top-down strategic alliances crossing organisational boundaries in order to provide the deliverables required by ANDS. This paper showcases the work undertaken at QUT, focusing on the Seeding the Commons project as a case study, and illustrates how the data management projects are interconnected. It describes the processes and systems being established to make QUT research data more visible and the nature of the collaborations between organisational areas required to achieve this. The paper concludes with the Seeding the Commons project outcomes and the contribution this project made to getting more research data ‘out there’.