492 results for Visual Knowledge Engineering
Abstract:
For over half a century, art directors within the advertising industry have been adapting to changes in media, culture and the corporate sector in order to enhance professional performance and competitiveness. These professionals seldom offer explicit justification of the role images play in effective communication. It is uncertain how this situation affects advertising performance, because advertising has nevertheless evolved in parallel as an industry able to fabricate new opportunities for itself. However, uncertainties in the formalization of art direction knowledge restrict the possibilities of knowledge transfer in higher education. The theoretical knowledge supporting advertising art direction has been adapted spontaneously from disciplines that rarely focus on aspects specific to the production of advertising content, such as marketing communication, design, visual communication, or visual art. Meanwhile, scholarly research has generated vast empirical knowledge about advertising images, but often with limited insight into production expertise. Because art direction is understood as an industry practice rather than an academic discipline, an art direction perspective in scholarly contributions is rare. Scholarly research relevant to art direction seldom offers viewpoints that help explain how research outputs may specifically contribute to art direction practices. This thesis is dedicated to formally understanding the knowledge underlying art direction and using it to explore models for visual analysis and knowledge transfer in higher education. The first three chapters offer, firstly, a review of practical and contextual aspects that help define art direction as a profession and as a component of higher education; secondly, a discussion of visual knowledge; and thirdly, a literature review of theoretical and analytic aspects relevant to art direction knowledge. Drawing on these three chapters, the thesis establishes explicit structures to support the development of an art direction curriculum in higher education programs. Following these chapters, the thesis explores a theoretical combination of the terms ‘aesthetics’ and ‘strategy’ as foundational notions for the study of art direction. The theoretical exploration of the term ‘strategic aesthetics’ unveils the potential for furthering knowledge in visual commercial practices in general. The empirical part of this research explores ways in which strategic aesthetics notions can extend to methodologies of visual analysis. Using a combination of content analysis and structures of interpretive analysis drawn from visual anthropology, this research discusses issues of methodological appropriation as it shifts aspects of conventional methodologies to take into consideration producer-centred research paradigms. Drawing on a sample taken from 2759 still ads in the online databases of the Cannes Lions Festival, this study uses an instrumental case study of love-related advertising to facilitate the analysis of content. This part of the research helps clarify the limitations and functionality of the theoretical and methodological framework explored in the thesis. In light of the findings and discussions produced throughout the thesis, this project aims to provide directions for higher education in relation to art direction and highlights potential pathways for further investigation of strategic aesthetics.
Abstract:
Retrieving information from Twitter is challenging due to its large volume, inconsistent writing and noise. Most existing information retrieval (IR) and text mining methods adopt a term-based approach, but suffer from problems of term variation such as polysemy and synonymy. These problems are exacerbated when such methods are applied to Twitter because of the length limit on tweets. Over the years, researchers have hypothesised that pattern-based methods should outperform term-based methods because they provide more context, but few studies have been conducted to support this hypothesis, especially on Twitter. This paper presents an innovative framework to address the issue of performing IR on microblogs. The proposed framework discovers patterns in tweets as higher-level features and assigns weights to low-level features (i.e. terms) based on their distributions within those higher-level features. We present experimental results on the TREC11 microblog dataset and show that our approach significantly outperforms the term-based methods Okapi BM25 and TF-IDF, as well as pattern-based methods, on precision, recall and F-measure.
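As a rough illustration of the kind of pattern-based term weighting this framework describes, the sketch below mines frequent co-occurring term sets from tokenised tweets and weights each term by the support of the patterns it appears in. The mining pass, the weighting formula and the scoring function are assumptions made for demonstration, not the paper's actual method.

```python
# Minimal sketch of pattern-based term weighting for tweet retrieval.
# The pattern mining, weighting and scoring choices below are illustrative
# assumptions, not the framework proposed in the paper.
from collections import defaultdict
from itertools import combinations

def mine_patterns(tweets, min_support=2, max_len=3):
    """Count co-occurring term sets (patterns) and keep the frequent ones."""
    counts = defaultdict(int)
    for terms in tweets:
        unique = sorted(set(terms))
        for k in range(1, max_len + 1):
            for pattern in combinations(unique, k):
                counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

def term_weights(patterns):
    """Weight each term by the support of the patterns it participates in."""
    weights = defaultdict(float)
    for pattern, support in patterns.items():
        for term in pattern:
            weights[term] += support / len(pattern)  # longer patterns share their support
    total = sum(weights.values()) or 1.0
    return {t: w / total for t, w in weights.items()}

def score(query_terms, tweet_terms, weights):
    """Score a tweet for a query by summing pattern-derived weights of shared terms."""
    return sum(weights.get(t, 0.0) for t in set(query_terms) & set(tweet_terms))

# Toy usage
tweets = [["storm", "sydney", "flood"], ["storm", "flood", "rescue"], ["sydney", "opera"]]
weights = term_weights(mine_patterns(tweets))
print(score(["storm", "flood"], tweets[0], weights))
```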
Abstract:
Association rule mining is a technique widely used when querying databases, especially transactional ones, in order to obtain useful associations or correlations among sets of items. Much work has been done focusing on efficiency, effectiveness and redundancy. There has also been a focus on the quality of rules from single-level datasets, with many interestingness measures proposed. However, with multi-level datasets now being common, there is a lack of interestingness measures developed for multi-level and cross-level rules. Single-level measures do not take into account the hierarchy found in a multi-level dataset. This leaves the Support-Confidence approach, which does not consider the hierarchy anyway and has other drawbacks, as one of the few measures available. In this chapter we propose two approaches that measure multi-level association rules to help evaluate their interestingness by considering the database’s underlying taxonomy. These measures of diversity and peculiarity can be used to help identify rules from multi-level datasets that are potentially useful.
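The abstract does not give the formulas behind the proposed diversity and peculiarity measures, so the sketch below only illustrates the general idea of taxonomy-aware interestingness: a hypothetical diversity score based on how far apart a rule's items sit in the dataset's hierarchy. The path-style item encoding and the distance formula are assumptions for demonstration.

```python
# Illustrative diversity-style measure for a multi-level association rule.
# Items are encoded as taxonomy paths ('food/dairy/milk'); this encoding and the
# distance-based score are assumptions, not the chapter's actual definitions.
from itertools import combinations

def taxonomy_distance(a, b):
    """Normalised distance between two taxonomy-coded items; siblings score low,
    items that diverge near the root score high."""
    pa, pb = a.split("/"), b.split("/")
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return (len(pa) + len(pb) - 2 * common) / (len(pa) + len(pb))

def rule_diversity(antecedent, consequent):
    """Average pairwise taxonomy distance over all items in the rule."""
    items = list(antecedent) + list(consequent)
    pairs = list(combinations(items, 2))
    return sum(taxonomy_distance(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

# Toy usage: a rule spanning unrelated branches scores higher than one between siblings
print(rule_diversity({"food/dairy/milk"}, {"food/dairy/cheese"}))
print(rule_diversity({"food/dairy/milk"}, {"cleaning/detergent"}))
```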
Abstract:
Although there are many approaches for developing secure programs, they are not necessarily helpful for evaluating the security of a pre-existing program. Software metrics promise an easy way of comparing the relative security of two programs or assessing the security impact of modifications to an existing one. Most studies in this area focus on high-level source code, but this approach fails to take compiler-specific code generation into account. In this work we describe a set of object-oriented Java bytecode security metrics which are capable of assessing the security of a compiled program from the point of view of potential information flow. These metrics can be used to compare the security of programs or to assess the effect of program modifications on security, using a tool which we have developed to automatically measure the security of a given Java bytecode program in terms of the accessibility of distinguished ‘classified’ attributes.
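The metrics themselves are computed from compiled Java bytecode by the authors' tool, so the sketch below only shows, over a simplified class model, what an accessibility-style measure of 'classified' attributes could look like. The data model and the ratio metric are illustrative assumptions, not the paper's definitions.

```python
# Toy accessibility-style metric over a simplified class model (not real bytecode).
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    visibility: str           # 'public', 'protected', 'package' or 'private'
    classified: bool = False  # marked as carrying sensitive information

@dataclass
class JavaClass:
    name: str
    attributes: list = field(default_factory=list)

def classified_accessibility(classes):
    """Fraction of classified attributes accessible outside their declaring class.
    Lower values suggest less potential for unwanted information flow."""
    classified = [a for c in classes for a in c.attributes if a.classified]
    if not classified:
        return 0.0
    exposed = [a for a in classified if a.visibility != "private"]
    return len(exposed) / len(classified)

# Toy usage
model = [JavaClass("Account", [Attribute("balance", "private", classified=True),
                               Attribute("owner", "public", classified=True),
                               Attribute("id", "public")])]
print(classified_accessibility(model))  # 0.5
```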
Abstract:
It has been shown that active control of locomotion increases accuracy and precision of nonvisual space perception, but psychological mechanisms of this enhancement are poorly understood. The present study explored a hypothesis that active control of locomotion enhances space perception by facilitating crossmodal interaction between visual and nonvisual spatial information. In an experiment, blindfolded participants walked along a linear path under one of the following two conditions: (1) They walked by themselves following a guide rope; and (2) they were led by an experimenter. Subsequently, they indicated the walked distance by tossing a beanbag to the origin of locomotion. The former condition gave participants greater control of their locomotion, and thus represented a more active walking condition. In addition, before each trial, half the participants viewed the room in which they performed the distance perception task. The other half remained blindfolded throughout the experiment. Results showed that although the room was devoid of any particular cues for walked distances, visual knowledge of the surroundings improved the precision of nonvisual distance perception. Importantly, however, the benefit of preview was observed only when participants walked more actively. This indicates that active control of locomotion allowed participants to better utilize their visual memory of the environment for perceiving nonvisually encoded distance, suggesting that active control of locomotion served as a catalyst for integrating visual and nonvisual information to derive spatial representations of higher quality.
Abstract:
This research draws on theories of emergence to inform the creation of an artistic and direct visualization: an interactive artwork and drawing tool for creative participant experiences. Emergence is characteristically creative, and many different models of emergence exist. It is therefore possible to effect creativity through the application of emergence mechanisms from these different disciplines. A review of theories of emergence and of examples of visualization in the arts is provided. An art project led by the author is then discussed in this context. This project, Iterative Intersections, is a collaboration with community artists from the Cerebral Palsy League. It has resulted in a number of creative outcomes, including the interactive art application Of me with me. Analytical discussion of this work shows how its construction draws on aspects of experience design, fractal and emergent theory to effect perceptual emergence and creative experience, as well as to facilitate self-efficacy.
Abstract:
Organisations use Enterprise Architecture (EA) to reduce organisational complexity, improve communication, align business and information technology (IT), and drive organisational change. Due to the dynamic nature of environmental and organisational factors, EA descriptions need to change over time to keep providing value to their stakeholders. Emerging business and IT trends, such as Service-Oriented Architecture (SOA), may impact EA frameworks, methodologies, governance and tools. However, the phenomenon of EA evolution is still poorly understood. Using Archer's morphogenetic theory as a foundation, this research conceptualises three analytical phases of EA evolution in organisations, namely conditioning, interaction and elaboration. Based on a case study with a government agency, this paper provides new empirically and theoretically grounded insights into EA evolution, in particular in relation to the introduction of SOA, and describes relevant generative mechanisms affecting EA evolution. In doing so, it builds a foundation for further examining the impact of other IT trends, such as mobile or cloud-based solutions, on EA evolution. At a practical level, the research delivers a model that can guide professionals in managing EA and continually evolving it.
Abstract:
The topic of designers’ knowledge and how they conduct the design process has been widely investigated in design research. Understanding theoretical and experiential knowledge in design has involved recognising the importance of designers’ experience of seeing and absorbing ideas from the world as points of reference (or precedents) that are consulted whenever a design problem arises (Lawson, 2004). Hence, various types of design knowledge have been categorized (Lawson, 2004), and the nature of design knowledge continues to be studied (Cross, 2006); nevertheless, the experiential aspects embedded in design knowledge have not been fully addressed. In particular, there has been little emphasis on investigating the ways in which designers’ individual experience influences different types of design tasks. This research focuses on the ways in which designers inform a usability design process. It aims to understand how designers design product usability, what informs their process, and the role their individual experience (and episodic knowledge) plays within the design process. This paper introduces initial outcomes from an empirical study involving observation of a design task that emphasized usability issues. It discusses the experiential knowledge observed in the visual representations (sketches) produced by designers as part of the design tasks. Through the use of visuals as a means to represent experiential knowledge, this paper presents initial research outcomes that demonstrate how designers’ individual experience is integrated into design tasks and communicated within the design process. Initial outcomes demonstrate the influence of designers’ experience on the design of product usability. It is expected that these outcomes will help identify the causal relationships between experience, context of use, and product usability, which will contribute to enhancing our understanding of the design of user-product interactions.
Abstract:
Experience plays an important role in building management. “How often will this asset need repair?” or “How much time is this repair going to take?” are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information with which to answer these questions. Young practitioners tend to rely on information based on regional averages and provided by publishing companies, in contrast to experienced project managers, who tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such “micro-scale” research is important in providing the required tools for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to take a broader, more “macro-scale” approach to research. Recent trends show that the Architectural, Engineering and Construction (AEC) industry is experiencing explosive growth in its capability to generate and collect data. A great deal of valuable knowledge can be obtained from the appropriate use of this data, and the need has therefore arisen to analyse this increasing amount of available data. Data Mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow the identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases, and visualization to automatically extract concepts, interrelationships, and patterns of interest from large databases. The project involves the development of a prototype tool to support facility managers, building owners and designers. This final report presents the AIMM™ prototype system and documents which data mining techniques can be applied and how, the results of their application and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The application of the AIMM™ prototype system to building models and their maintenance data (supplied by industry partners) utilises various data mining algorithms, and the maintenance data is analysed using interactive visual tools. The application of the AIMM™ prototype system to help improve maintenance management and the building life cycle includes: (i) data preparation and cleaning; (ii) integration of meaningful domain attributes; (iii) extensive data mining experiments using visual analysis (stacked histograms), classification and clustering techniques, and association rule mining algorithms such as “Apriori”; and (iv) filtering and refining of the data mining results, including their potential implications for improving maintenance management.
Maintenance data for a variety of asset types were selected for demonstration, with the aim of discovering meaningful patterns that assist facility managers in strategic planning and provide a knowledge base to help shape future requirements and design briefing. The prototype system developed here has yielded positive and interesting results regarding patterns and structures in the data.
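As a hedged illustration of the association rule mining step mentioned above, the sketch below runs a single Apriori-style pass over invented maintenance work orders; the record fields and support threshold are hypothetical and are not drawn from the AIMM™ data.

```python
# Illustrative Apriori-style pass over hypothetical maintenance work orders.
# Field names and thresholds are invented for demonstration only.
from collections import defaultdict
from itertools import combinations

work_orders = [  # each record: a set of categorical facts about one maintenance job
    {"asset=chiller", "fault=leak", "season=summer"},
    {"asset=chiller", "fault=leak", "season=summer"},
    {"asset=lift", "fault=door_jam", "season=winter"},
    {"asset=chiller", "fault=compressor", "season=summer"},
]

def frequent_pairs(transactions, min_support=0.5):
    """Find frequent single items first, then frequent pairs built only from them."""
    n = len(transactions)
    item_counts = defaultdict(int)
    for t in transactions:
        for item in t:
            item_counts[item] += 1
    frequent_items = {i for i, c in item_counts.items() if c / n >= min_support}
    pair_counts = defaultdict(int)
    for t in transactions:
        for a, b in combinations(sorted(t & frequent_items), 2):
            pair_counts[(a, b)] += 1
    return {p: c / n for p, c in pair_counts.items() if c / n >= min_support}

for pair, support in frequent_pairs(work_orders).items():
    print(pair, round(support, 2))  # e.g. ('asset=chiller', 'season=summer') 0.75
```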
Abstract:
Experience plays an important role in building management. “How often will this asset need repair?” or “How much time is this repair going to take?” are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information with which to answer these questions. Young practitioners tend to rely on information based on regional averages and provided by publishing companies, in contrast to experienced project managers, who tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such “micro-scale” research is important in providing the required tools for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to take a broader, more “macro-scale” approach to research. Recent trends show that the Architectural, Engineering and Construction (AEC) industry is experiencing explosive growth in its capability to generate and collect data. A great deal of valuable knowledge can be obtained from the appropriate use of this data, and the need has therefore arisen to analyse this increasing amount of available data. Data Mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow the identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases, and visualization to automatically extract concepts, interrelationships, and patterns of interest from large databases. The project involves the development of a prototype tool to support facility managers, building owners and designers. This industry-focused report presents the AIMM™ prototype system and documents which data mining techniques can be applied and how, the results of their application and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The application of the AIMM™ prototype system to building models and their maintenance data (supplied by industry partners) utilises various data mining algorithms, and the maintenance data is analysed using interactive visual tools. The application of the AIMM™ prototype system to help improve maintenance management and the building life cycle includes: (i) data preparation and cleaning; (ii) integration of meaningful domain attributes; (iii) extensive data mining experiments using visual analysis (stacked histograms), classification and clustering techniques, and association rule mining algorithms such as “Apriori”; and (iv) filtering and refining of the data mining results, including their potential implications for improving maintenance management.
Maintenance data for a variety of asset types were selected for demonstration, with the aim of discovering meaningful patterns that assist facility managers in strategic planning and provide a knowledge base to help shape future requirements and design briefing. The prototype system developed here has yielded positive and interesting results regarding patterns and structures in the data.
Abstract:
In this paper we discuss our current efforts to develop and implement an exploratory, discovery-mode assessment item into the total learning and assessment profile for a target group of about 100 second-level engineering mathematics students. The assessment item under development is composed of two parts, namely a set of "pre-lab" homework problems (which focus on relevant prior mathematical knowledge, concepts and skills), and complementary computing laboratory exercises which are undertaken within a fixed (1 hour) time frame. In particular, the computing exercises exploit the algebraic manipulation and visualisation capabilities of the symbolic algebra package MAPLE, with the aim of promoting understanding of certain mathematical concepts and skills via visual and intuitive reasoning, rather than a formal or rigorous approach. The assessment task we are developing is aimed at providing students with a significant learning experience, in addition to providing feedback on their individual knowledge and skills. To this end, a noteworthy feature of the scheme is that marks awarded for the laboratory work are primarily based on the extent to which reflective, critical thinking is demonstrated, rather than the number of CBE-style tasks completed by the student within the allowed time. With regard to student learning outcomes, a novel and potentially critical feature of our scheme is that the assessment task is designed to be intimately linked to the overall course content, in that it aims to introduce important concepts and skills (via individual student exploration) which will be revisited somewhat later in the pedagogically more restrictive formal lecture component of the course (typically a large-group plenary format). Furthermore, the time delay involved, or "incubation period", is also a deliberate design feature: it is intended to allow students the opportunity to undergo potentially important internal re-adjustments in their understanding, before being exposed to lectures on related course content, which are invariably delivered in a more condensed, formal and mathematically rigorous manner. In our presentation, we will discuss in more detail our motivation and rationale for trialling such a scheme with the targeted student group. Some of the advantages and disadvantages of our approach (as we perceived them at the initial stages) will also be enumerated. In a companion paper, the theoretical framework for our approach will be more fully elaborated, and measures of student learning outcomes (as obtained from, e.g., student-provided feedback) will be discussed.
Abstract:
This paper presents a vision-based method of vehicle localisation that has been developed and tested on a large forklift-type robotic vehicle which operates in a mainly outdoor industrial setting. The localiser uses a sparse 3D edge map of the environment and a particle filter to estimate the pose of the vehicle. The vehicle operates in dynamic and non-uniform outdoor lighting conditions, an issue that is addressed by using knowledge of the scene to intelligently adjust the camera exposure and hence improve the quality of the information in the image. Results from the industrial vehicle are shown and compared with another, laser-based localiser which acts as ground truth. An improved likelihood metric, using per-edge calculation, is presented and is shown to be 40% more accurate in estimating rotation. Visual localisation results from the vehicle driving an arbitrary 1.5 km path during a bright, sunny period show an average position error of 0.44 m and a rotation error of 0.62 degrees.
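As a rough sketch of the particle-filter localisation loop described above, the code below propagates 2D pose particles with a simple motion model, weights them with a toy per-edge likelihood (averaging a per-edge score rather than accumulating raw error, in the spirit of the per-edge metric mentioned), and resamples. The motion and observation models, noise values and map are simplified assumptions, not the paper's implementation.

```python
# Simplified particle-filter pose estimation with a per-edge likelihood (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, w, dt, noise=(0.05, 0.02)):
    """Propagate (x, y, heading) particles with a noisy velocity motion model."""
    x, y, th = particles.T
    th = th + (w + rng.normal(0, noise[1], len(particles))) * dt
    x = x + (v + rng.normal(0, noise[0], len(particles))) * dt * np.cos(th)
    y = y + (v + rng.normal(0, noise[0], len(particles))) * dt * np.sin(th)
    return np.column_stack([x, y, th])

def per_edge_likelihood(particle, observed_edges, map_edges, sigma=0.3):
    """Average a Gaussian score per matched edge, so poses are compared fairly
    regardless of how many edges are currently visible."""
    scores = []
    for obs, model in zip(observed_edges, map_edges):
        err = np.linalg.norm(obs - (model - particle[:2]))
        scores.append(np.exp(-0.5 * (err / sigma) ** 2))
    return np.mean(scores) if scores else 1e-9

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights / weights.sum())
    return particles[idx]

# Toy usage: 200 particles tracking a vehicle near the origin
particles = rng.normal([0.0, 0.0, 0.0], [0.5, 0.5, 0.1], size=(200, 3))
map_edges = np.array([[2.0, 1.0], [3.0, -1.0]])    # known edge map, flattened to 2D here
observed_edges = map_edges - np.array([0.1, 0.0])  # edges as seen from the true pose
particles = predict(particles, v=1.0, w=0.0, dt=0.1)
weights = np.array([per_edge_likelihood(p, observed_edges, map_edges) for p in particles])
particles = resample(particles, weights)
print("pose estimate:", particles.mean(axis=0).round(2))
```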