469 results for Model information
at Queensland University of Technology - ePrints Archive
Abstract:
Alexander’s Ecological Dominance and Social Competition (EDSC) model currently provides the most comprehensive overview of human traits in the development of a theory of human evolution and sociality (Alexander, 1990; Flinn, Geary & Ward, 2005; Irons, 2005). His model provides a basis for explaining the evolution of human socio-cognitive abilities. Our paper examines the extension of Alexander’s model to incorporate the human trait of information behavior, in synergy with ecological dominance and social competition, as a human socio-cognitive competence. The paper discusses the various interdisciplinary perspectives exploring how evolution has shaped information behavior and why information behavior is emerging as an important human socio-cognitive competence, and it outlines these issues, including the extension of Spink and Currier’s (2006a,b) evolution of information behavior model towards a more integrated understanding of how information behaviors have evolved (Spink & Cole, 2006).
Abstract:
This paper presents the results from a study of information behaviors in the context of people's everyday lives, undertaken in order to develop an integrated model of information behavior (IB). Thirty-four participants from across six countries maintained a daily information journal or diary, mainly through a secure web log, for two weeks, yielding an aggregate of 468 participant days over five months. The text-rich diary data were analyzed using a multi-method qualitative-quantitative approach in the following order: Grounded Theory analysis with manual coding, automated concept analysis using thesaurus-based visualization, and finally a statistical analysis of the coding data. The findings indicate that people engage in several information behaviors simultaneously throughout their everyday lives (including home and work life) and that sense-making is entangled in all of them. Participants engaged in many of the information behaviors in a parallel, distributed, and concurrent fashion: many information behaviors for one information problem, one information behavior across many information problems, and many information behaviors concurrently across many information problems. The findings also indicate that information avoidance, both active and passive, is a common phenomenon, and that information organizing behaviors, or the lack thereof, caused the most problems for participants. An integrated model of information behaviors is presented based on the findings.
Abstract:
This project was a step forward in developing and evaluating a novel mathematical model that can deduce the meaning of words based on their use in language. The model can be applied to a wide range of natural language applications, including the information seeking process most of us undertake on a daily basis.
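The abstract does not specify the model itself; purely as an illustration, the sketch below assumes a simple distributional (co-occurrence based) reading of "meaning from use": a word's meaning is approximated by the contexts it appears in, and similarity is measured by the cosine between co-occurrence vectors. The corpus, function names, and window size are all hypothetical.

```python
# Minimal sketch of a distributional semantics idea: a word's "meaning" is
# approximated by the words that co-occur with it. Hypothetical example only;
# the actual model developed in the project is not described in the abstract.
from collections import Counter
from math import sqrt

corpus = [
    "the doctor treated the patient in the clinic",
    "the nurse treated the patient in the hospital",
    "the pilot flew the plane to the airport",
]

def cooccurrence_vector(word, sentences, window=2):
    """Count words appearing within `window` positions of `word`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doctor, nurse, pilot = (cooccurrence_vector(w, corpus) for w in ("doctor", "nurse", "pilot"))
print(cosine(doctor, nurse))  # higher: similar usage contexts
print(cosine(doctor, pilot))  # lower: different usage contexts
```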
Abstract:
The two longitudinal case studies that make up this dissertation sought to explain and predict the relationship between usability and clinician acceptance of a health information system. The overall aim of the research was to determine what role usability plays in the acceptance or rejection of systems used by clinicians in a healthcare context. The focus was on the end users (the clinicians) rather than on the views of the system designers, the managers responsible for implementation, or the clients of the clinicians. A mixed methods approach was adopted that drew on both qualitative and quantitative research methods. The study followed the implementation of a community health information system from its early beginnings to established practice. Users were drawn from different health service departments with distinctly different organisational cultures and attitudes to information and communication technology used in this context. The study provided evidence that a usability analysis in this context would not necessarily be valid when the users have prior reservations about acceptance. The initial training and post-implementation support were investigated, together with the nature of the clinicians themselves, to determine factors that may influence their attitude. The research identified that acceptance of a system is not necessarily a measure of its quality, capability, and usability; rather, it is influenced by the user’s attitude, which is shaped by outside factors, and by the nature and quality of training. The need to recognise the limitations of current methodologies for analysing usability and acceptance was explored to lay the foundations for further research.
Abstract:
Authorised users (insiders) are behind the majority of security incidents with high financial impacts. Because authorisation is the process of controlling users’ access to resources, improving authorisation techniques may mitigate the insider threat. Current approaches to authorisation suffer from the assumption that users will not (or cannot) depart from the expected behaviour implicit in the authorisation policy. In reality, however, users can and do depart from the canonical behaviour. This paper argues that the conflict of interest between insiders and authorisation mechanisms is analogous to the subset of problems formally studied in the field of game theory. It proposes a game-theoretic authorisation model that ensures users’ potential misuse of a resource is explicitly considered when making an authorisation decision. The resulting authorisation model is dynamic in the sense that its access decisions vary according to changes in the explicit factors that influence the cost of misuse for both the authorisation mechanism and the insider.
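The abstract does not give the formal model; as a hedged illustration of one plausible reading, the sketch below grants access only when the expected payoff of granting (benefit of legitimate use minus expected misuse cost) beats denial, with the misuse probability and costs as explicit, changeable parameters. All names, thresholds, and values are hypothetical, not the paper's formulation.

```python
# Hypothetical sketch of a cost-aware authorisation decision inspired by the
# game-theoretic framing above: grant access only if the expected value of
# granting (benefit minus expected misuse cost) exceeds that of denying.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    benefit_of_granting: float   # value of legitimate use to the organisation
    cost_of_misuse: float        # damage if the insider misuses the resource
    p_misuse: float              # current estimate that this user will misuse
    cost_of_denial: float        # productivity lost if a legitimate request is denied

def authorise(req: AccessRequest) -> bool:
    """Grant iff the expected payoff of granting exceeds that of denying."""
    expected_grant = ((1 - req.p_misuse) * req.benefit_of_granting
                      - req.p_misuse * req.cost_of_misuse)
    expected_deny = -(1 - req.p_misuse) * req.cost_of_denial
    return expected_grant > expected_deny

# The decision is dynamic: the same request flips as the misuse estimate changes.
print(authorise(AccessRequest(100, 1000, 0.05, 20)))  # True  (low estimated risk)
print(authorise(AccessRequest(100, 1000, 0.30, 20)))  # False (elevated risk)
```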
Abstract:
We examine which capabilities technologies provide to support collaborative process modeling. We develop a model that explains how technology capabilities impact cognitive group processes and how they lead to improved modeling outcomes and positive technology beliefs. We test this model through a free simulation experiment with collaborative process modelers structured around a set of modeling tasks. With our study, we provide an understanding of the process of collaborative process modeling and detail implications for research as well as guidelines for its practical design.
Abstract:
Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users’ information needs from a collection of documents. A fundamental assumption of these approaches is that the documents in the collection are all about one topic. In reality, however, users’ interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models that represent multiple topics in a collection of documents, and it has been widely utilized in fields such as machine learning and information retrieval. However, its effectiveness in information filtering has not been well explored. Patterns are generally thought to be more discriminative than single terms for describing documents, but the enormous number of discovered patterns hinders their effective and efficient use in real applications; selecting the most discriminative and representative patterns from this huge set therefore becomes crucial. To deal with these limitations and problems, this paper proposes a novel information filtering model, the Maximum matched Pattern-based Topic Model (MPBTM). The main distinctive features of the proposed model are: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are used to estimate document relevance to the user’s information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
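As an illustration only (the paper's exact formulation is not reproduced here), the sketch below shows one way a "maximum matched pattern" relevance score could be computed: for each topic, find the topic's patterns that occur in the incoming document, keep the best (most specific) match, and combine matches weighted by topic importance. The topics, patterns, weights, and scores are hypothetical.

```python
# Hypothetical sketch in the spirit of maximum-matched-pattern scoring: each
# topic is represented by patterns (term sets) with significance scores; a
# document is scored by its best-matching pattern per topic, weighted by the
# topic's importance to the user's information needs. All values are made up.
topics = {
    "cycling": {
        "weight": 0.6,
        "patterns": [({"bike", "race"}, 0.8), ({"bike", "race", "tour"}, 0.9)],
    },
    "nutrition": {
        "weight": 0.4,
        "patterns": [({"diet", "protein"}, 0.7)],
    },
}

def relevance(document_terms: set[str]) -> float:
    """Sum, over topics, of topic weight times the maximum matched pattern score."""
    score = 0.0
    for topic in topics.values():
        matched = [s for pattern, s in topic["patterns"] if pattern <= document_terms]
        if matched:
            score += topic["weight"] * max(matched)
    return score

doc = {"the", "tour", "bike", "race", "was", "fast"}
print(relevance(doc))  # 0.6 * 0.9 = 0.54; the more specific pattern wins
```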
Abstract:
In order to execute, study, or improve operating procedures, companies document them as business process models. Often, business process analysts capture every single exception handling or alternative task handling scenario within a model. This tendency results in large process specifications in which the core process logic becomes hidden among numerous modeling constructs. To fulfill different tasks, companies develop several model variants of the same business process at different abstraction levels; maintaining such groups of models afterwards involves considerable synchronization effort and is error-prone. We propose an abstraction technique that allows generalization of process models. Business process model abstraction assumes that a detailed model of a process is available and derives coarse-grained models from it. The task of abstraction is to tell significant model elements from insignificant ones and to reduce the latter. We propose to learn insignificant process elements from supplementary model information, e.g., task execution time or frequency of task occurrence. Finally, we discuss a mechanism for user control of the model abstraction level: an abstraction slider.
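The paper's algorithm is not reproduced here; the sketch below only illustrates the slider idea under simple assumptions: each task carries supplementary information (occurrence frequency, mean execution time), a significance score is derived from it, and the slider value acts as the threshold below which tasks are abstracted away. Task names, the significance measure, and all numbers are hypothetical.

```python
# Hypothetical sketch of an "abstraction slider": tasks whose significance falls
# below the slider threshold are dropped from the coarse-grained process model.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    frequency: float   # relative frequency of occurrence in process logs
    exec_time: float   # mean execution time in minutes

def significance(task: Task) -> float:
    # One possible significance measure: expected time contribution of the task.
    return task.frequency * task.exec_time

def abstract_model(tasks: list[Task], slider: float) -> list[Task]:
    """Keep only tasks at or above the slider threshold (0 = full detail)."""
    return [t for t in tasks if significance(t) >= slider]

process = [
    Task("Check order", 1.00, 5.0),
    Task("Handle missing address", 0.05, 8.0),   # rare exception handling
    Task("Ship goods", 0.95, 12.0),
]

for slider in (0.0, 1.0):
    kept = [t.name for t in abstract_model(process, slider)]
    print(f"slider={slider}: {kept}")
# slider=0.0 keeps every task; slider=1.0 hides the rare exception path.
```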
Abstract:
In this paper, we explore how BIM functionalities, together with novel management concepts and methods, have been utilized in thirteen hospital projects in the United States and the United Kingdom. Secondary data collection and analysis were used as the method. Initial findings indicate that the utilization of BIM enables a holistic view of project delivery and helps to integrate project parties into a collaborative process. The initiative to implement BIM must come from the top down to enable early involvement of all key stakeholders. It seems that it is people's resistance to adapting to new ways of working and thinking, rather than the immaturity of the technology, that hinders the utilization of BIM.
Abstract:
The world of construction is changing, and so too are the expectations of stakeholders regarding strategies for adapting existing resources (people, equipment and finances), processes and tools to the evolving needs of the industry. Building Information Modelling (BIM) is a data-rich, digital approach for representing the building information required for design and construction. BIM tools play a crucial role and are instrumental to current approaches, by industry stakeholders, aimed at harnessing the power of a single information repository for improved project delivery and maintenance. Yet building specifications, which document information on material quality and workmanship requirements, remain distinctly separate from the model information typically represented in BIM models. BIM adoption for building design, construction and maintenance is an industry-wide strategy aimed at addressing such concerns about information fragmentation. However, to effectively reduce inefficiencies due to fragmentation, BIM models require the crucial building information contained in specifications. This paper profiles some specification tools which have been used in industry as a means of bridging the BIM-Specifications divide. We analyse the distinction between current attempts at integrating BIM and specifications and our approach, which utilizes rich specification information embedded within objects in a product library as a method for improving the quality of information contained in BIM objects at various levels of model development.
Abstract:
This paper conceptualizes a framework for bridging the BIM (building information modelling)-specifications divide by augmenting objects within BIM with specification parameters derived from a product library. We demonstrate how model information, enriched with data at various LODs (levels of development), can evolve simultaneously with design and construction, using different representations of a window object embedded in a wall as exemplars of life-cycle phases at different levels of granularity. The conceptual standpoint is informed by the need to explore a methodological approach that extends beyond the limitations of current modelling platforms in enhancing the information content of BIM models. This work therefore demonstrates that BIM objects can be augmented with construction specification parameters by leveraging product libraries.
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences because of the large number of terms and patterns and the presence of noise. Most existing popular text mining and classification methods have adopted term-based approaches; however, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. This research presents a promising method, Relevance Feature Discovery (RFD), for solving this challenging issue. It discovers both positive and negative patterns in text documents as high-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the high-level features. The thesis also introduces an adaptive model (called ARFD) to enhance the flexibility of using RFD in an adaptive environment. ARFD automatically updates the system's knowledge based on a sliding window over new incoming feedback documents and can efficiently decide which incoming documents bring new knowledge into the system. Substantial experiments using the proposed models on Reuters Corpus Volume 1 and TREC topics show that the proposed models significantly outperform both state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine and other pattern-based methods.
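As a hedged illustration of the idea described above (not the thesis's actual formulas), the sketch below weights low-level terms by how often they appear in high-level positive patterns and penalises terms that also occur in negative patterns, then scores documents by the summed weights of their terms. The patterns, penalty factor, and numbers are hypothetical.

```python
# Hypothetical sketch in the spirit of relevance feature discovery: terms that
# are concentrated in positive (relevant) patterns gain weight, and terms that
# also appear in negative (non-relevant) patterns are penalised.
from collections import Counter

positive_patterns = [{"solar", "panel"}, {"solar", "energy", "storage"}, {"wind", "energy"}]
negative_patterns = [{"energy", "drink"}, {"panel", "interview"}]

def term_weights(pos, neg, penalty=0.5):
    pos_counts = Counter(t for p in pos for t in p)
    neg_counts = Counter(t for p in neg for t in p)
    weights = {}
    for term, c in pos_counts.items():
        # Distribution across positive patterns, reduced by negative evidence.
        weights[term] = c / len(pos) - penalty * neg_counts.get(term, 0) / len(neg)
    return weights

def score(document_terms, weights):
    """Rank a document by the summed weights of the terms it contains."""
    return sum(weights.get(t, 0.0) for t in document_terms)

w = term_weights(positive_patterns, negative_patterns)
print(sorted(w.items(), key=lambda kv: -kv[1]))
print(score({"solar", "energy", "storage"}, w))    # relevant-looking document
print(score({"energy", "drink", "interview"}, w))  # non-relevant-looking document
```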
Abstract:
This thesis is a study of the automatic discovery of text features for describing user information needs. It presents an innovative data-mining approach that discovers useful knowledge from both relevance and non-relevance feedback information. The proposed approach can largely reduce noise in discovered patterns and significantly improve the performance of text mining systems. This study provides a promising method for the fields of Data Mining and Web Intelligence.
Abstract:
This paper conceptualizes a framework for bridging the BIM-Specifications divide by embedding project-specific information in BIM objects by means of a product library. We demonstrate how model information, enriched with data at various levels of development (LODs), can evolve simultaneously with design and construction, using a window object embedded in a wall as an exemplar of life-cycle phases at different levels of granularity. The conceptual approach is informed by the need to explore a method that takes cognizance of the limitations of current modelling tools in enhancing the information content of BIM models. This work therefore attempts to answer the question, “How can the modelling of building information be enhanced throughout the life-cycle phases of buildings utilizing building specification information?”
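To make the framing concrete, the snippet below sketches one possible data shape, purely illustrative and not drawn from the paper: a window object whose specification parameters, sourced from a product library, are filled in progressively as its level of development increases. The product code, property names, and LOD threshold are hypothetical.

```python
# Hypothetical sketch: a BIM object whose specification parameters, sourced
# from a product library, are attached once the object reaches a sufficiently
# detailed level of development (LOD).
product_library = {
    "WIN-2040": {
        "frame_material": "aluminium",
        "glazing": "double, low-e",
        "u_value_w_m2k": 1.4,
        "acoustic_rating_db": 32,
    }
}

window = {
    "id": "W-01",
    "host": "Wall-External-North",
    "lod": 200,              # early design: geometry only
    "specification": {},
}

def enrich(obj, product_code, target_lod):
    """Attach product-library specification data when the object reaches the LOD."""
    if target_lod >= 350:    # assumed threshold at which specification detail applies
        obj["specification"].update(product_library[product_code])
    obj["lod"] = target_lod
    return obj

enrich(window, "WIN-2040", 350)
print(window["lod"], window["specification"]["glazing"])
```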
Abstract:
This thesis addresses the topic of real-time decision making by driverless (autonomous) city vehicles, i.e. their ability to make appropriate driving decisions in non-simplified urban traffic conditions. After reviewing the state of research and explaining the research question, the thesis presents solutions for the subcomponents relevant to decision making with respect to information input (World Model), information output (Driving Maneuvers), and the real-time decision making process itself. The World Model is a software component developed to collect information from the perception and communication subsystems, maintain an up-to-date view of the vehicle’s environment, and provide the required input information to the Real-Time Decision Making subsystem in a well-defined and structured way. The real-time decision making process consists of two consecutive stages. The first stage uses a Petri net to model the safety-critical selection of feasible driving maneuvers, while the second stage uses Multiple Criteria Decision Making (MCDM) methods to select the most appropriate driving maneuver, focusing on objectives related to efficiency and comfort. The complex task of autonomous driving is subdivided into subtasks, called driving maneuvers, which represent the output (i.e. the decision alternatives) of the real-time decision making process. Driving maneuvers are considered implementations of closed-loop control algorithms, each capable of maneuvering the autonomous vehicle in a specific traffic situation. Experiments in both a 3D simulation and the real world attest that the developed approach is suitable for dealing with the complexity of real-world urban traffic situations.
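The thesis's Petri net and MCDM formulations are not reproduced here; as an illustration of the second stage only, the sketch below scores the feasible maneuvers (assumed to have already passed the safety-critical first stage) with a simple weighted sum over efficiency and comfort criteria, one of the standard MCDM methods. The criteria, weights, maneuver names, and scores are hypothetical.

```python
# Hypothetical sketch of the second decision stage: pick the feasible driving
# maneuver with the best weighted score over efficiency and comfort criteria.
# (The safety-critical Petri-net filtering of the first stage is assumed done.)

criteria_weights = {"time_gain": 0.5, "comfort": 0.3, "energy": 0.2}

# Normalised scores in [0, 1] for each feasible maneuver; values are made up.
feasible_maneuvers = {
    "follow_lane": {"time_gain": 0.4, "comfort": 0.9, "energy": 0.8},
    "overtake":    {"time_gain": 0.9, "comfort": 0.5, "energy": 0.4},
    "change_lane": {"time_gain": 0.6, "comfort": 0.7, "energy": 0.7},
}

def weighted_score(scores: dict) -> float:
    """Weighted-sum aggregation of a maneuver's criterion scores."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

best = max(feasible_maneuvers, key=lambda m: weighted_score(feasible_maneuvers[m]))
print(best, round(weighted_score(feasible_maneuvers[best]), 2))  # overtake 0.68
```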