388 results for automatic content extraction
Abstract:
The Automated Estimator and LCADesign are two early examples of nD modelling software, both of which rely on the extraction of quantities from CAD models to support their further processing. The issues of building information modelling (BIM), quantity takeoff for different purposes and automating quantity takeoff are discussed by comparing the aims and use of the two programs, and the technical features of each program are described. The technical issues around the use of 3D models are described, together with implementation issues and comments on the implementation of the IFC specifications. Some user issues that emerged through the development process are also described, with a summary of the generic research tasks necessary to fully support the use of BIM and nD modelling.
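As a concrete illustration of the quantity extraction both tools depend on, the sketch below pulls base quantities out of an IFC model using the open-source ifcopenshell library. This is a minimal sketch, not either program's actual implementation: the file name is a placeholder, and the restriction to volume quantities is an assumption made for brevity.

```python
import ifcopenshell  # open-source IFC toolkit, not the tools described above

# Placeholder path; real models differ in which quantity sets they carry.
model = ifcopenshell.open("building.ifc")

takeoff = {}
for element in model.by_type("IfcBuildingElement"):
    for rel in element.IsDefinedBy or []:
        if not rel.is_a("IfcRelDefinesByProperties"):
            continue
        definition = rel.RelatingPropertyDefinition
        if not definition.is_a("IfcElementQuantity"):
            continue
        for q in definition.Quantities:
            if q.is_a("IfcQuantityVolume"):  # volumes only, for brevity
                key = (element.is_a(), q.Name)
                takeoff[key] = takeoff.get(key, 0.0) + q.VolumeValue

# A complete bill of quantities would also collect areas, lengths and counts.
for (element_type, quantity_name), total in sorted(takeoff.items()):
    print(f"{element_type:25s} {quantity_name:20s} {total:10.2f} m3")
```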
Abstract:
Alvin Toffler’s image of the prosumer (1970, 1980, 1990) continues to significantly influence our understanding of the user-led, collaborative processes of content creation which are today labelled “social media” or “Web 2.0”. A closer look at Toffler’s own description of his prosumer model reveals, however, that it remains firmly grounded in the mass media age: the prosumer is clearly not the self-motivated creative originator and developer of new content who can today be observed in projects ranging from open source software through Wikipedia to Second Life, but simply a particularly well-informed, and therefore both particularly critical and particularly active, consumer. The highly specialised, high-end consumers found in areas such as hi-fi or car culture are far more representative of the ideal prosumer than the participants in non-commercial (or as yet non-commercial) collaborative projects. To expect Toffler’s 1970s model of the prosumer to describe these 21st-century phenomena was, of course, always unrealistic. To describe the creative and collaborative participation which today characterises user-led projects such as Wikipedia, terms such as ‘production’ and ‘consumption’ are no longer particularly useful – even in laboured constructions such as ‘commons-based peer-production’ (Benkler 2006) or ‘p2p production’ (Bauwens 2005). In the user communities participating in such forms of content creation, roles as consumers and users have long become inextricably interwoven with those as producers and creators: users are always also potential producers of the shared information collection, whether or not they are aware of this – they have taken on a new, hybrid role which may be best described as that of a produser (Bruns 2008). Projects which build on such produsage can be found in areas from open source software development through citizen journalism to Wikipedia, and beyond these in multi-user online computer games, filesharing, and even communities collaborating on the design of material goods. While addressing a range of different challenges, they nonetheless build on a small number of universal key principles. This paper documents these principles and indicates the possible implications of this transition from production and prosumption to produsage.
Abstract:
Buildings consume resources and energy, contribute to pollution of our air, water and soil, impact the health and well-being of populations and constitute an important part of the built environment in which we live. The ability to assess their design automatically from 3D CAD representations, with a view to reducing that impact, enables building design professionals to make informed decisions about the environmental impact of building structures. Contemporary 3D object-oriented CAD files contain a wealth of building information. LCADesign has been designed as a fully integrated approach for automated eco-efficiency assessment of commercial buildings direct from 3D CAD. LCADesign accesses the 3D CAD detail through Industry Foundation Classes (IFCs) - the international standard file format for defining architectural and constructional CAD graphic data as 3D real-world objects - permitting construction professionals to interrogate these intelligent drawing objects for analysis of a design's performance. The automated take-off provides quantities for all building components, whose specific production processes, logistics and raw material inputs are identified where necessary to calculate a complete list of quantities for all products such as concrete, steel, timber and plastic. This information is combined with a life cycle inventory database to estimate key internationally recognised environmental indicators such as CML, EPS and Eco-indicator 99. This paper outlines the key modules of LCADesign and their role in delivering an automated eco-efficiency assessment for commercial buildings.
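To make the take-off-to-indicator step concrete, here is a minimal sketch of how extracted quantities might be combined with life cycle inventory factors to score a single indicator. The material names and factor values are invented placeholders, not LCADesign's LCI data or any recognised indicator's real coefficients.

```python
# Illustrative only: per-unit impact factors are invented placeholders,
# not real life cycle inventory data.
lci_factors_kg_co2e_per_m3 = {
    "concrete": 250.0,
    "steel": 12000.0,
    "timber": 60.0,
}

# Quantities as produced by an automated take-off (also placeholders).
quantities_m3 = {"concrete": 310.0, "steel": 4.2, "timber": 55.0}

# Each indicator is a weighted sum of quantities by its own factor set.
indicator = sum(quantities_m3[m] * lci_factors_kg_co2e_per_m3[m]
                for m in quantities_m3)
print(f"Embodied impact: {indicator:,.0f} kg CO2-eq")
```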
Abstract:
Driven and supported by Web 2.0 technologies, there is today a trend towards merging the use and production of content as produsage (German: Produtzung). To ensure the quality of the content created and sustainable participation by users, four fundamental principles must be observed: * the greatest possible openness; * seeding the community with content and tools; * supporting group dynamics and devolving responsibility; * no exploitation of the community and its work.
Abstract:
Monitoring unused or dark IP addresses offers opportunities to extract useful information about both ongoing and new attack patterns. In recent years, different techniques have been used to analyze such traffic, including sequential analysis, where a change in traffic behavior - for example, a change in mean - is taken as an indication of malicious activity. Change points themselves, however, say little about the detected change; further data processing is necessary to extract useful information and identify the exact cause of the change, which is difficult given the size and nature of the observed traffic. In this paper, we address the problem of analyzing a large volume of such traffic by correlating change points identified in different traffic parameters. The significance of the proposed technique is two-fold. Firstly, it automatically extracts information related to a change point by correlating change points detected across multiple traffic parameters. Secondly, it validates a detected change point through the simultaneous presence of another change point in a different parameter. Using a real network trace collected from unused IP addresses, we demonstrate that the proposed technique enables us not only to validate a change point but also to extract useful information about its causes.
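A minimal sketch of the idea, assuming a simple windowed mean-shift detector in place of the paper's sequential analysis: change points are flagged per traffic parameter, then pairs that co-occur across two parameters within a small lag both validate the change and hint at its cause. The function names, window size and thresholds are illustrative assumptions.

```python
import numpy as np

def detect_changes(series, window=50, z_thresh=3.0):
    """Flag indices where the windowed mean shifts by more than z_thresh
    standard errors (a simple mean-shift detector standing in for the
    paper's sequential analysis; adjacent flags mark one change region)."""
    x = np.asarray(series, dtype=float)
    points = []
    for t in range(window, len(x) - window):
        before, after = x[t - window:t], x[t:t + window]
        se = np.sqrt(before.var() / window + after.var() / window)
        if se > 0 and abs(after.mean() - before.mean()) / se > z_thresh:
            points.append(t)
    return points

def correlate_changes(changes_a, changes_b, max_lag=5):
    """Pair change points from two traffic parameters (e.g. packet rate
    and distinct source count) that occur within max_lag samples of each
    other: a co-occurring pair validates the change and suggests a cause."""
    return [(a, b) for a in changes_a for b in changes_b
            if abs(a - b) <= max_lag]
```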
Abstract:
China’s accession to the World Trade Organisation (WTO) has greatly enhanced global interest in investment in the Chinese media market, where demand for digital content is growing rapidly. The East Asian region is positioned as a growth area in many forms of digital content and digital service industries. China is attempting to catch up and take its place as a production centre to offset challenges from neighbouring countries, while Taiwan is seeking to use China both as an export market and as a production site for its digital content. This research investigates the entry strategies of Taiwanese digital content firms into the Chinese market. By examining the strategies of a sample of Taiwan-based companies, this study also explores the evolution of their market strategies, with a particular focus on how distinctive business practices such as guanxi matter to Taiwanese business and to relations with Mainland China. This research examines how entrepreneurs manage the characteristics of digital content products and, in turn, how digital content entrepreneurs adapt to changing market circumstances. The project selected five Taiwan-based digital content companies that have business operations in China: Wang Film, Artkey, CnYES, Somode and iPartment. The study involved a field trip to Shanghai and Taiwan, undertaken between November 2006 and March 2007, to conduct interviews and gather documentation and archival reports. Six senior managers and nine experts were interviewed. Data were analysed according to Miller’s firm-level entrepreneurship theory, foreign direct investment theory, the Life Cycle Model and guanxi philosophy. Most studies of SMEs have focused on free market (capitalist) environments. In contrast, this thesis examines how Taiwanese digital content firms’ strategies apply in the Chinese market. I identified three main types of business strategy: cost-reduction, innovation and quality-enhancement; and four categories of functional strategies: product, marketing, resource acquisition and organizational restructuring. In this study, I introduce the concept of ‘entrepreneurial guanxi’: special relationships that imply mutual obligation, assurance and understanding to secure and exchange favors in entrepreneurial activities. While guanxi is a feature of many studies of business in Pan-Chinese society, it also plays an important mediating role in the digital content industries. In this thesis, I integrate the Life Cycle Model with the dynamic concept of strategy and outline the significant differences in the evolution of strategy between two types of digital content companies: off-line firms (Wang Film and Artkey) and web-based firms (CnYES, Somode and iPartment). Off-line digital content firms tended to adopt resource acquisition strategies in their initial stages and marketing strategies in second and subsequent stages. In contrast, web-based digital content companies mainly adopted product and marketing strategies in the early stages and took innovative approaches to product and marketing strategies throughout their business development; some also adopted organizational restructuring strategies in the final stage. Finally, I propose a ‘Taxonomy Matrix of Entrepreneurial Strategies’ built on two dimensions: innovation, and the firm’s resource acquisition for entrepreneurial strategy. The matrix is divided into four cells: Effective, Bounded, Conservative and Impoverished.
Abstract:
In this third Quantum Interaction (QI) meeting it is time to examine our failures. One of the weakest elements of QI as a field arises in its continuing lack of models displaying proper evolutionary dynamics. This paper presents an overview of the modern generalised approach to the derivation of time evolution equations in physics, showing how the notion of symmetry is essential to the extraction of operators in quantum theory. The form that symmetry might take in non-physical models is explored, and a number of viable avenues are identified.
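A minimal worked example of the symmetry argument in its standard quantum setting: time-translation symmetry forces evolution to compose as a one-parameter unitary group, Stone's theorem extracts the self-adjoint generator, and differentiating recovers the evolution equation.

```latex
% Time-translation symmetry: evolution over successive intervals composes,
% so U(t) forms a one-parameter unitary group; Stone's theorem then
% guarantees a self-adjoint generator H, the operator extracted from
% the symmetry:
U(t_1)\,U(t_2) = U(t_1 + t_2), \qquad U(t) = e^{-iHt/\hbar}.
% Differentiating U(t)\lvert\psi(0)\rangle at t = 0 gives the time
% evolution equation:
i\hbar \frac{\partial}{\partial t}\,\lvert\psi(t)\rangle = H\,\lvert\psi(t)\rangle .
```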
Abstract:
This is the second in a series of discussion papers (Table 1) on the pedagogy of supervision in the technology disciplines. The papers form part of an Australian Learning and Teaching Council Fellowship program conducted by ALTC Associate Fellow, Professor Christine Bruce, Queensland University of Technology.
Abstract:
An informed citizenry is essential to the effective functioning of democracy. In most modern liberal democracies, citizens have traditionally looked to the media as the primary source of information about socio-political matters. In our increasingly mediated world, it is critical that audiences be able to use the media effectively and accurately to meet their information needs. Media literacy, the ability to access, understand, evaluate and create media content, is therefore a vital skill for a healthy democracy. The past three decades have seen the rapid expansion of the information environment, particularly through Internet technologies, and media usage patterns have changed dramatically as a result. Blogs and websites are now popular sources of news and information, and for some sections of the population they are likely to be the first, and possibly only, information source accessed when information is required. What are the implications for media literacy in such a diverse and changing information environment? The Alexandria Manifesto stresses the link between libraries, a well-informed citizenry and effective governance, so how do these changes impact on libraries? This paper considers the role libraries can play in developing media literate communities, and explores the ways in which traditional media literacy training may be expanded to better equip citizens for new media technologies. Drawing on original empirical research, the paper highlights a key shortcoming of existing media literacy approaches: overlooking the importance of needs identification as an initial step in media selection. Self-awareness of one's actual information need is not automatic, as can be witnessed daily at reference desks in libraries the world over; citizens very often do not know what they need when it comes to information. Without this knowledge, selecting the most appropriate information source from the vast range available becomes an uncertain, possibly even random, enterprise. Incorporating reference interview-style training into media literacy education, whereby individuals develop the skills to interrogate themselves about their underlying information needs, will enhance media literacy approaches. This increased focus on the needs of the individual will also push media literacy education towards a more constructivist methodology. The paper also stresses the importance of media literacy training for adults: media literacy education received in school or even university cannot be expected to retain its relevance over time in our rapidly evolving information environment. Further, constructivist teaching approaches highlight the importance of context to the learning process, so it may be more effective to offer media literacy education relating to news media use to adults, whilst school-based approaches focus on types of media more relevant to young people, such as entertainment media. Librarians are ideally placed to offer such community-based media literacy education for adults: they already understand, through their training and practice of the reference interview, how to identify underlying information needs, and libraries are situated within the community contexts where the everyday practice of media literacy occurs. Given the Alexandria Manifesto's emphasis on the link between libraries, a well-informed citizenry and effective governance, it is clear that libraries have a role to play in fostering media literacy within their communities.
Abstract:
With the widespread application of electronic learning (e-Learning) technologies to education at all levels, an increasing number of online educational resources and messages are generated in the corresponding e-Learning environments. Nevertheless, it is quite difficult, if not totally impossible, for instructors to read through and analyze these online messages to predict the progress of their students on the fly. The main contribution of this paper is the illustration of a novel concept map generation mechanism underpinned by a fuzzy domain ontology extraction algorithm. The proposed mechanism can automatically construct concept maps based on the messages posted to online discussion forums. By browsing the concept maps, instructors can quickly identify the progress of their students and adjust the pedagogical sequence on the fly. Our initial experimental results reveal that the accuracy and quality of the automatically generated concept maps are promising. Our research work opens the door to the development and application of intelligent software tools to enhance e-Learning.
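As a rough sketch of how a fuzzy relation between forum terms might be derived, the snippet below scores term pairs by co-occurrence normalised by the rarer term's frequency and keeps pairs above a threshold as concept map edges. This is one common fuzzy-association measure, not the paper's ontology extraction algorithm; the pre-tokenised posts and the threshold value are assumptions.

```python
from collections import Counter
from itertools import combinations

def fuzzy_concept_map(posts, threshold=0.4):
    """Build weighted concept-map edges from tokenised forum posts.
    Relation strength = co-occurrence count / frequency of the rarer
    term, a simple fuzzy-association measure in [0, 1]."""
    term_freq = Counter()
    pair_freq = Counter()
    for post in posts:
        terms = set(post)                      # one vote per post
        term_freq.update(terms)
        pair_freq.update(frozenset(p) for p in combinations(sorted(terms), 2))
    edges = {}
    for pair, n in pair_freq.items():
        a, b = tuple(pair)
        strength = n / min(term_freq[a], term_freq[b])
        if strength >= threshold:              # keep only strong relations
            edges[(a, b)] = round(strength, 2)
    return edges

# Usage: posts are pre-tokenised lists of domain terms.
posts = [["recursion", "stack"], ["recursion", "base-case"],
         ["recursion", "stack", "overflow"]]
print(fuzzy_concept_map(posts))
```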
Abstract:
Spatial information captured by optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in automatic surveillance of electrical infrastructure. For an automatic vision-based power line inspection system, detecting power lines against a cluttered background is one of the most important and challenging tasks. In this paper, a novel method is proposed specifically for power line detection in aerial images. A pulse coupled neural filter is developed to remove background noise and generate an edge map before the Hough transform is employed to detect straight lines, and the Hough transform is further improved by knowledge-based line clustering in Hough space to refine the detection results. Experiments on real image data captured from a UAV platform demonstrate that the proposed approach is effective for automatic power line detection.
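A minimal sketch of the line detection stage, assuming OpenCV's Canny edge detector in place of the paper's pulse coupled neural filter and a crude angle filter in place of its knowledge-based clustering in Hough space; the file name and all thresholds are placeholders.

```python
import cv2
import numpy as np

# Placeholder input; Canny stands in for the pulse coupled neural filter.
img = cv2.imread("powerline.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Probabilistic Hough transform to find straight line segments.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)

# Crude stand-in for knowledge-based clustering in Hough space:
# keep only near-horizontal segments, assuming spans cross the frame.
kept = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < 30:
            kept.append((x1, y1, x2, y2))
print(f"{len(kept)} candidate power line segments")
```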
Abstract:
The application of object-based approaches to the problem of extracting vegetation information from images requires accurate delineation of individual tree crowns. This paper presents an automated method for individual tree crown detection and delineation that applies a simplified pulse coupled neural network (PCNN) model in spectral feature space, followed by post-processing using morphological reconstruction. The algorithm was tested on high-resolution multi-spectral aerial images, and the results were compared with two existing image segmentation algorithms. The results demonstrate that our algorithm outperforms the other two solutions, with an average accuracy of 81.8%.
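A minimal sketch of the morphological reconstruction step using scikit-image, assuming crowns appear as bright peaks in a single spectral band; the random array, the prominence value h, and the h-maxima formulation are assumptions made for illustration, not the paper's exact post-processing.

```python
import numpy as np
from skimage.measure import label
from skimage.morphology import reconstruction

# Stand-in for one band of a multi-spectral image (crowns = bright peaks).
band = np.random.rand(256, 256)
h = 0.3                          # assumed peak prominence threshold

# Reconstruction-by-dilation from (band - h) suppresses peaks shallower
# than h; subtracting the result leaves the "h-dome" regions.
recon = reconstruction(band - h, band, method='dilation')
peaks = (band - recon) > 0.05    # small epsilon to ignore float noise

crowns = label(peaks)            # one integer label per candidate crown
print(crowns.max(), "candidate crowns")
```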
Abstract:
Light Detection and Ranging (LIDAR) has great potential to assist vegetation management in power line corridors by providing more accurate geometric information about power line assets and the vegetation along the corridors. However, the development of algorithms for automatically processing LIDAR point cloud data, in particular for feature extraction and classification of raw point cloud data, is still in its infancy. In this paper, we take advantage of LIDAR intensity and classify ground and non-ground points by statistically analyzing the skewness and kurtosis of the intensity data. The Hough transform is then employed to detect power lines from the filtered object points. The experimental results show the effectiveness of our methods and indicate that better results were obtained using LIDAR intensity data than elevation data.
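A minimal sketch of the intensity-based filtering idea, assuming an iterative trim: the highest-intensity returns are peeled off until the remaining distribution looks near-Gaussian by its skewness and kurtosis, and what survives is treated as ground. The thresholds, trim rule and trim direction are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def ground_mask(intensity, max_iter=50):
    """Trim the highest-intensity returns until skewness and excess
    kurtosis are near zero (roughly Gaussian); returns within the
    surviving intensity range are treated as ground."""
    pts = np.sort(np.asarray(intensity, dtype=float))
    for _ in range(max_iter):
        if abs(skew(pts)) < 0.1 and abs(kurtosis(pts)) < 0.5:
            break
        pts = pts[:-max(1, len(pts) // 100)]   # drop the top ~1% each pass
    return np.asarray(intensity) <= pts[-1]    # boolean ground mask

# Usage with synthetic data: ground returns plus brighter object returns.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(30, 5, 5000), rng.normal(80, 10, 500)])
print(f"{ground_mask(sample).mean():.1%} classified as ground")
```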