113 results for artificial intelligence
Abstract:
A computational framework for enhancing design in an evolutionary approach with a dynamic hierarchical structure is presented in this paper. This framework can be used as an evolutionary kernel for building computer-supported design systems. It provides computational components for generating, adapting and exploring alternative design solutions at multiple levels of abstraction with hierarchically structured design representations. In this paper, preliminary experimental results of using this framework in several design applications are presented.
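The generate-adapt-explore loop that such an evolutionary kernel provides can be sketched in a few lines. The sketch below is illustrative only and is not taken from the paper: the function names, the elite-survivor strategy, and the toy fitness function are all assumptions.

```python
import random

def evolve(population, fitness, mutate, generations=50, elite=2):
    """Minimal evolutionary kernel: keep the best designs, mutate the rest."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:elite]
        # Fill the next generation with mutated copies of the survivors.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(len(population) - elite)
        ]
    return max(population, key=fitness)

# Toy design problem: maximise the sum of a parameter vector in [0, 1].
fit = sum
mut = lambda d: [min(1.0, max(0.0, x + random.uniform(-0.1, 0.1))) for x in d]
best = evolve([[random.random() for _ in range(5)] for _ in range(20)], fit, mut)
```

A real design kernel would replace the flat parameter vector with the paper's hierarchically structured representation, mutating at multiple levels of abstraction.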
Abstract:
Facing the difficulty of propagating and synthesising information from conceptual to embodiment design, this paper introduces a function-oriented, axiom-based conceptual modeling scheme. Default logic reasoning is exploited for the recognition and reconstitution of conceptual product geometric and topological information. The proposed product modeling system and reasoning approach demonstrate a methodology of "structural variation design", which is verified in the implementation of a GPAL (Green Product All Life-cycle) CAD system. The GPAL system includes major enhancement modules: a mechanism layout sketching method based on fuzzy logic, a knowledge-based function-to-form mapping mechanism, and a conceptual form reconstitution paradigm based on default geometric reasoning. A mechanical hand design example shows a more than 20-fold increase in design efficacy with these enhancement modules in the GPAL system on a general 3D CAD platform.
Abstract:
Information and communication technologies (ICTs) have established their position in knowledge management and are now evolving towards an era of self-intelligence (Klosterman, 2001). In the 21st century, ICTs for urban development and planning are imperative to improving the quality of life and place. This includes managing traffic, waste, electricity, sewerage and water quality; monitoring fire and crime; conserving renewable resources; and coordinating urban policies and programs for urban planners, civil engineers, and government officers and administrators. Handling tasks in the field of urban management often requires complex, interdisciplinary knowledge as well as profound technical information. Most of this information has been compiled during the last few years in the form of manuals, reports, databases, and programs. Frequently, however, these information sources and services are either not known or not readily available to the people who need them. To provide urban administrators and the public with comprehensive information and services, various ICTs are being developed. In the early 1990s, Mark Weiser (1993) proposed the Ubiquitous Computing project at the Xerox Palo Alto Research Centre in the US. He provided a vision of a built environment in which digital networks link individual residents not only to other people but also to goods and services whenever and wherever they need them (Mitchell, 1999). Since then, the Republic of Korea (ROK) has continuously developed national strategies for knowledge-based urban development (KBUD) through the agendas of Cyber Korea, E-Korea and U-Korea. Among these agendas, the U-Korea agenda in particular aims at the convergence of ICTs and urban space for prosperous urban and economic development. U-Korea strategies create a series of U-cities based on ubiquitous computing and ICTs by providing ubiquitous city (U-city) infrastructure and services in urban space.
The goals of U-city development are not only boosting the national economy but also creating value in knowledge-based communities. It provides an opportunity for both central and local governments to collaborate on U-city projects, optimize information utilization, and minimize regional disparities. This chapter introduces the Korean-led U-city concept, planning and design schemes, and management policies, and discusses the implications of the U-city concept in planning for KBUD.
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve this problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collections or estimations of their size, and collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as sets of terms, as commonly occurs in collection selection, they are represented as sets of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation), the current state-of-the-art collection selection method, which relies on collection size estimation, using the standard R-value metric, with encouraging results.
The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo. In addition, several specialist search engines, such as PubMed and that of the U.S. Department of Agriculture, were analysed. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
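The core idea of subject-based collection selection described above can be sketched simply: profile each engine as a distribution over subjects rather than terms, then rank engines by coverage of the query's subject. Everything below is a hypothetical illustration, not the thesis's algorithm; the keyword classifier stands in for the trained ontology.

```python
from collections import Counter

def subject_profile(sampled_docs, classify):
    """Represent a collection as a distribution over subjects, not terms."""
    counts = Counter(classify(doc) for doc in sampled_docs)
    total = sum(counts.values())
    return {subject: n / total for subject, n in counts.items()}

def rank_collections(profiles, query_subject):
    """Rank collections by the fraction of sampled documents in the query's subject."""
    return sorted(profiles,
                  key=lambda name: profiles[name].get(query_subject, 0.0),
                  reverse=True)

# Hypothetical stand-in for the ontology: classify a doc by keyword match.
def classify(doc):
    for subject in ("medicine", "agriculture"):
        if subject in doc:
            return subject
    return "general"

profiles = {
    "engine_a": subject_profile(["medicine trial", "medicine study", "news"], classify),
    "engine_b": subject_profile(["agriculture crops", "news", "sports"], classify),
}
ranking = rank_collections(profiles, "medicine")  # → ['engine_a', 'engine_b']
```

Ranking by subject coverage rather than raw term overlap is what gives the method its robustness to synonymy: two engines using different vocabulary for the same topic still map to the same subject node.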
Abstract:
In this paper we propose a method for vision-only topological simultaneous localisation and mapping (SLAM). Our approach does not use motion or odometric information, but a sequence of colour histograms from visited places. In particular, we address the perceptual aliasing problem which occurs when only external observations are used in topological navigation. We propose a Bayesian inference method to incrementally build a topological map by inferring spatial relations from the sequence of observations while simultaneously estimating the robot's location. The algorithm aims to build a small map which is consistent with local adjacency information extracted from the sequence of measurements. Local adjacency information is incorporated to disambiguate places which would otherwise appear to be the same. Experiments in an indoor environment show that the proposed technique is capable of dealing with perceptual aliasing using visual observations only and successfully performs topological SLAM.
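The appearance-matching step underlying such a system can be sketched with histogram intersection, a standard similarity measure for normalised colour histograms. This is an assumed illustration of the matching step only (the paper's contribution is the Bayesian inference and adjacency reasoning layered on top); the threshold value and function names are hypothetical.

```python
def histogram_intersection(h1, h2):
    """Similarity between two normalised colour histograms (1.0 = identical)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def match_place(observation, map_histograms, threshold=0.8):
    """Index of the best-matching known place, or None if it looks new."""
    scores = [histogram_intersection(observation, h) for h in map_histograms]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None

# Two known places, each a 3-bin normalised colour histogram.
known = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]]
place = match_place([0.45, 0.55, 0.0], known)  # → 0
```

Perceptual aliasing is exactly the failure mode of this step in isolation: two distinct places can have near-identical histograms, which is why the paper disambiguates matches using local adjacency in the observation sequence.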
Abstract:
In recent decades a number of Australian artists and teacher/artists have given serious attention to the creation of performance forms and performance engagement models that respect children’s intelligence, engage with themes of relevance, and avoid the clichés of children’s theatre while connecting both sincerely and playfully with current understandings of the way in which young children develop and engage with the world. Historically, a majority of performing arts companies touring Australian schools, or companies seeking schools to view a performance in a dedicated performance venue, have engaged with their audiences in what can be called a ‘drop-in drop-out’ model. A six-month practice-led research project (The Tashi Project), which challenged the tenets of the ‘drop-in drop-out’ model, has recently been undertaken by Sandra Gattenhof and Mark Radvan in conjunction with early childhood students from three Brisbane primary school classrooms who were positioned as co-researchers and co-artists. The children, researchers and performers worked in a complementary relationship in both the artistic process and the development of the product.
Abstract:
Experience plays an important role in building management. “How often will this asset need repair?” or “How much time is this repair going to take?” are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information in order to answer these types of questions. Young practitioners tend to rely on information that is based on regional averages and provided by publishing companies. This is in contrast to experienced project managers, who tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such “micro-scale” levels of research are important in providing the required tools for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to have a broad approach to research at a more “macro-scale.” Recent trends show that the Architectural, Engineering, Construction (AEC) industry is experiencing explosive growth in its capabilities to generate and collect data. There is a great deal of valuable knowledge that can be obtained from the appropriate use of this data, and therefore the need has arisen to analyse this increasing amount of available data. Data Mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow the identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed.
These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases, and visualization to automatically extract concepts, interrelationships, and patterns of interest from large databases. The project involves the development of a prototype tool to support facility managers, building owners and designers. This final report presents the AIMM™ prototype system and documents how and what data mining techniques can be applied, the results of their application, and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The application of the AIMM™ prototype system to building models and their maintenance data (supplied by industry partners) utilises various data mining algorithms, and the maintenance data is analysed using interactive visual tools. The application of the AIMM™ prototype system to improving maintenance management and the building life cycle includes: (i) data preparation and cleaning; (ii) integrating meaningful domain attributes; (iii) performing extensive data mining experiments using visual analysis (stacked histograms), classification and clustering techniques, and association rule mining algorithms such as “Apriori”; and (iv) filtering and refining data mining results, including the potential implications of these results for improving maintenance management. Maintenance data for a variety of asset types were selected for demonstration, with the aim of discovering meaningful patterns to assist facility managers in strategic planning and providing a knowledge base to help shape future requirements and design briefing. Utilising the prototype system developed here, positive and interesting results regarding patterns and structures of data have been obtained.
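The Apriori algorithm named above rests on one pruning idea: a set of items can only be frequent if all of its subsets are frequent. The sketch below is a minimal, assumed illustration of that idea on toy maintenance records, not the AIMM™ implementation; the transaction data and support threshold are invented.

```python
from itertools import combinations

def apriori(transactions, min_support=2):
    """Frequent itemsets via Apriori: grow candidates only from frequent sets."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, k_sets = {}, items
    while k_sets:
        counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
        level = {s: c for s, c in counts.items() if c >= min_support}
        frequent.update(level)
        # Join frequent k-itemsets sharing k-1 items into (k+1)-candidates.
        keys = list(level)
        k_sets = {a | b for a, b in combinations(keys, 2)
                  if len(a | b) == len(a) + 1}
    return frequent

# Toy maintenance records: components repaired together in one work order.
txns = [{"pump", "valve"}, {"pump", "valve", "seal"},
        {"pump", "seal"}, {"valve"}]
freq = apriori(txns, min_support=2)
# {pump, valve} co-occurs in 2 orders, so it survives; {valve, seal} does not.
```

In a maintenance setting such co-occurrence patterns suggest rules like "work orders touching the pump often also touch the valve", which is the kind of correlation the report describes mining from building maintenance data.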