923 results for Knowledge Building
Abstract:
The UK government aims to achieve an 80% reduction in CO2 emissions by 2050, which requires collective effort across all UK industry sectors. The housing sector in particular has large potential to contribute, because it alone accounts for 27% of total UK CO2 emissions, and 87% of the housing responsible for that 27% will still be standing in 2050. It is therefore essential to improve the energy efficiency of the existing housing stock, which was built to low energy efficiency standards. To this end, the whole house needs to be refurbished in a sustainable way, considering the lifetime financial and environmental impacts of the refurbished house. However, the current refurbishment process struggles to generate financially and environmentally affordable refurbishment solutions, owing to the highly fragmented nature of refurbishment practice and a lack of knowledge and skills about whole-house refurbishment in the construction industry. To generate an affordable refurbishment solution, diverse information on the costs and environmental impacts of refurbishment measures and materials must be collected and integrated, in the right sequence, among key project stakeholders throughout the refurbishment project life cycle. Consequently, researchers are increasingly studying ways of using Building Information Modelling (BIM) to tackle these problems, because BIM can help construction professionals manage projects collaboratively by integrating diverse information, and determine the best refurbishment solution among alternatives by calculating the life cycle costs and lifetime CO2 performance of each solution. Despite these capabilities, BIM adoption in the housing sector remains low, at 25%, and the use of BIM for housing refurbishment projects has rarely been studied.
Therefore, this research aims to develop a BIM framework for formulating a financially and environmentally affordable whole-house refurbishment solution based simultaneously on the Life Cycle Costing (LCC) and Life Cycle Assessment (LCA) methods. To achieve this aim, a BIM feasibility study was first conducted as a pilot to examine whether BIM is suitable for housing refurbishment, and a BIM framework was then developed using grounded theory, since no precedent research existed. The framework was next examined through a hypothetical case study using BIM input data collected from a questionnaire survey of homeowners' preferences for housing refurbishment. Finally, the framework was validated by academics and professionals, who were given the framework together with a refurbishment solution formulated through it on the basis of the LCC and LCA studies. As a result, BIM was identified as suitable for housing refurbishment as a management tool, and the development of the BIM framework was found to be timely. The framework, comprising seven project stages, was developed to formulate an affordable refurbishment solution. Through the case study, the Building Regulations standard was identified as the most affordable energy efficiency standard, yielding the best LCC and LCA results when applied to a whole-house refurbishment solution. In addition, the Fabric Energy Efficiency Standard (FEES) is recommended when customers are willing to adopt a higher energy standard; up to 60% of CO2 emissions can be cut through whole-house fabric refurbishment to the FEES. Furthermore, limitations and challenges to fully utilizing the BIM framework for housing refurbishment were revealed, such as a lack of BIM objects with proper cost and environmental information, limited interoperability between different BIM software packages, and the limited LCC and LCA datasets available in BIM systems.
Finally, the BIM framework was validated as suitable for housing refurbishment projects, and reviewers commented that it would be more practical if a specific BIM library for housing refurbishment, with proper LCC and LCA datasets, were developed. This research is expected to provide a systematic way of formulating a refurbishment solution using BIM and to become a basis for further research on BIM in the housing sector that resolves the current limitations and challenges. Future research should enhance the framework by developing a more detailed process map and BIM objects with proper LCC and LCA information.
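The LCC comparison that the abstract describes rests on a standard net-present-value calculation: the upfront capital cost of a refurbishment option plus its discounted running costs over the study period. The sketch below is a minimal illustration of that calculation; the figures and the 3.5% discount rate are hypothetical, not taken from the research itself.

```python
def life_cycle_cost(capital_cost, annual_costs, discount_rate):
    """Net present value of a refurbishment option: upfront capital
    plus each year's running cost discounted back to year zero."""
    npv = capital_cost
    for year, cost in enumerate(annual_costs, start=1):
        npv += cost / (1 + discount_rate) ** year
    return npv

# Compare two hypothetical whole-house refurbishment options over 3 years:
# option A is cheaper upfront, option B has lower annual energy costs.
option_a = life_cycle_cost(10_000, [500, 500, 500], 0.035)
option_b = life_cycle_cost(12_000, [300, 300, 300], 0.035)
```

Under these assumed numbers the cheaper-upfront option wins; over a realistic multi-decade study period the ranking can reverse, which is exactly why the framework evaluates lifetime rather than capital cost alone.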
Abstract:
The paper gives an overview of the ongoing FP6-IST INFRAWEBS project and describes the main layers and software components embedded in an application-oriented realisation framework. An important part of INFRAWEBS is the Semantic Web Unit (SWU), a collaboration platform and interoperable middleware for ontology-based handling and maintenance of Semantic Web Services (SWS). The framework provides knowledge about a specific domain and relies on ontologies to structure and exchange this knowledge with semantic service development modules. The INFRAWEBS Designer and Composer are sub-modules of the SWU responsible for creating Semantic Web Services using a Case-Based Reasoning approach. The Service Access Middleware (SAM) is responsible for building the communication channels between users and the various other modules, and serves as generic middleware for the deployment of Semantic Web Services. Together, this software toolset provides a development framework for creating and maintaining the full life cycle of Semantic Web Services with specific application support.
Abstract:
Advances in building learning technology now have to emphasize individual learning alongside the popular focus on the technology itself. Unlike most research, which has concentrated on finding ways to build, manage, classify, categorize and search knowledge on the server, our work looks at knowledge development in the individual's learning. We build the technology that resides behind a knowledge sharing platform where an individual's learning and sharing activities take place. The system we built, KFTGA (Knowledge Flow Tracer and Growth Analyzer), demonstrates the capability of identifying the topics and subjects an individual engages with during a knowledge sharing session, and of measuring the growth of that individual's knowledge of a specific subject over a given time span.
Abstract:
* The work is partly supported by RFFI grant 08-07-00062-a
Abstract:
The paper explores the functionalities of eight start pages and considers their usefulness when used as a mashable platform for deployment of personal learning environments (PLE) for self-organized learners. The Web 2.0 effects and eLearning 2.0 strategies are examined from the point of view of how they influence the methods of gathering and capturing data, information and knowledge, and the learning process. Mashup technology is studied in order to see what kind of components can be used in PLE realization. A model of a PLE for self-organized learners is developed and it is used to prototype a personal learning and research environment in the start pages Netvibes, Pageflakes and iGoogle.
Abstract:
One of the ultimate aims of Natural Language Processing is to automate the analysis of the meaning of text. A fundamental step in that direction consists in enabling effective ways to automatically link textual references to their referents, that is, real world objects. The work presented in this paper addresses the problem of attributing a sense to proper names in a given text, i.e., automatically associating words representing Named Entities with their referents. The method for Named Entity Disambiguation proposed here is based on the concept of semantic relatedness, which in this work is obtained via a graph-based model over Wikipedia. We show that, without building the traditional bag of words representation of the text, but instead only considering named entities within the text, the proposed method achieves results competitive with the state-of-the-art on two different datasets.
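The abstract's core ingredient is semantic relatedness computed over the Wikipedia link graph. One widely used formulation of graph-based relatedness is the normalised link-distance of Milne and Witten, which the sketch below implements; it is offered as an illustration of the general technique, not as the paper's exact model, and the inlink sets are hypothetical.

```python
import math

def link_relatedness(links_a, links_b, total_articles):
    """Milne-Witten style normalised link distance: two Wikipedia
    articles are considered related when many articles link to both."""
    a, b = set(links_a), set(links_b)
    shared = a & b
    if not shared:
        return 0.0
    numer = math.log(max(len(a), len(b))) - math.log(len(shared))
    denom = math.log(total_articles) - math.log(min(len(a), len(b)))
    return max(0.0, 1.0 - numer / denom)

# Hypothetical inlink sets for two candidate referents of the name "Paris".
paris_city = {"France", "Seine", "Eiffel Tower", "Louvre"}
paris_hilton = {"Hilton Hotels", "Reality TV", "Eiffel Tower"}
score = link_relatedness(paris_city, paris_hilton, 6_000_000)
```

A disambiguator in this spirit scores each candidate referent by its relatedness to the other named entities in the text and picks the most coherent assignment, which matches the abstract's point that a bag-of-words representation is unnecessary.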
Abstract:
This research takes a dynamic view on the knowledge coordination process, aiming to explain how the process is affected by changes in the operating environment, from normal situations to emergencies in traditional and fast-response organizations, and why these changes occur. We first conceptualize the knowledge coordination process by distinguishing between four dimensions - what, when, how and who - that together capture the full scope of the knowledge coordination process. We use these dimensions to analyze knowledge coordination practices and the activities constituting these practices, in the IT functions of traditional and fast-response (military) organizations where we distinguish between "normal" and "emergency" operating conditions. Our findings indicate that (i) inter-relationships between knowledge coordination practices change under different operating conditions, and (ii) the patterns of change are different in traditional and fast-response organizations.
Abstract:
This paper presents research on the linguistic structure of Bulgarian bell knowledge. The idea of building a semantic structure for Bulgarian bells arose during the “Multimedia fund - BellKnow” project, in which a large amount of data was collected about bells: their structure, history, technical data, and so on. This is the first attempt to explain bell knowledge through computational linguistics and to deliver a semantic representation of that knowledge. Based on this research, linguistic components that realize different types of analysis of text objects have been implemented in term dictionaries. Thus we lay the foundation for linguistic analysis services in these digital dictionaries, aiding the study of the kinds, number and frequency of the lexical units that constitute various bell objects.
Abstract:
We describe an ontological representation of the data in an archive containing detailed descriptions of church bells. As an object of cultural heritage, a bell has general properties such as geometric dimensions, weight, the sound of the bell, the pitch of its tone, and acoustical diagrams obtained using contemporary equipment. We use the Protégé platform to define the basic ontological objects and the relations between them.
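The class-and-relation structure that such an ontology captures can be sketched as subject-predicate-object triples. The class and property names below are hypothetical stand-ins, not taken from the actual BellKnow archive or its Protégé model.

```python
# Minimal triple sketch of a bell ontology; names are illustrative only.
triples = [
    ("Bell", "is_a", "CulturalHeritageObject"),
    ("Bell", "has_property", "weight"),
    ("Bell", "has_property", "pitch"),
    ("ExampleVillageBell", "instance_of", "Bell"),
]

def properties_of(cls, facts):
    """Collect the has_property values declared for a class."""
    return {o for s, p, o in facts if s == cls and p == "has_property"}

bell_props = properties_of("Bell", triples)
```

In Protégé the same structure would be expressed as OWL classes, object properties and individuals; the triple form simply makes the underlying graph explicit.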
Abstract:
Topic classification (TC) of short text messages offers an effective and fast way to reveal events happening around the world, ranging from those related to Disaster (e.g., Hurricane Sandy) to those related to Violence (e.g., the Egyptian revolution). Previous approaches to TC have mostly focused on exploiting individual knowledge sources (KSs) (e.g., DBpedia or Freebase) without considering the graph structures that surround concepts present in KSs when detecting the topics of Tweets. In this paper we introduce a novel approach for harnessing such graph structures from multiple linked KSs, by: (i) building a conceptual representation of the KSs, (ii) leveraging contextual information about concepts by exploiting semantic concept graphs, and (iii) providing a principled way of combining KSs. Experiments evaluating our TC classifier in the context of Violence detection (VD) and Emergency Responses (ER) show promising results that significantly outperform various baseline models, including an approach using a single KS without linked data and an approach using only Tweets. Copyright 2013 ACM.
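Step (iii), combining knowledge sources in a principled way, can be illustrated with a simple late-fusion scheme: each source scores every candidate topic and a weighted sum merges the scores. This is a generic sketch of source combination, not the paper's actual method, and all source names, weights and scores below are hypothetical.

```python
def combine_sources(source_scores, weights):
    """Late fusion: each knowledge source scores every topic, and a
    weighted sum combines them into a single topic ranking."""
    combined = {}
    for source, scores in source_scores.items():
        w = weights.get(source, 0.0)
        for topic, score in scores.items():
            combined[topic] = combined.get(topic, 0.0) + w * score
    best = max(combined, key=combined.get)
    return best, combined

# Hypothetical per-source topic scores for one tweet.
scores = {
    "dbpedia":  {"Violence": 0.7, "Disaster": 0.2},
    "freebase": {"Violence": 0.4, "Disaster": 0.5},
}
best, combined = combine_sources(scores, {"dbpedia": 0.6, "freebase": 0.4})
```

The weights would normally be learned or tuned on held-out data; the point of the sketch is only that linked sources contribute complementary evidence to one decision.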
Abstract:
This research aims to contribute to understanding the implementation of knowledge management systems (KMS) in the field of health through a case study, leading to theory building and theory extension. We use the concept of the business process approach to knowledge management as a theoretical lens to analyse and explore how a large teaching hospital developed, executed and practically implemented a KMS. A qualitative study was conducted over a 2.5-year period, with data collected from semi-structured interviews with eight members of the strategic management team, 12 clinical users and 20 patients, in addition to non-participant observation of meetings and documents. The theoretical propositions strategy was used as the overarching approach for data analysis. Our case study provides evidence that truly patient-centred approaches to supporting care delivery with a KMS benefit from process thinking at both the planning and implementation stages, and from an emphasis on the knowledge demands resulting from: the activities along the care pathways; the points where cross-overs in care occur; and knowledge sharing for the integration of care. The findings also suggest that, despite theoretical awareness of KMS implementation methodologies, the actual execution of such systems requires practice and learning. Flexible, fluid approaches through rehearsal are important, and communications strategies should focus heavily on transparency, incorporating both structured and unstructured communication methods.
Abstract:
The aim of this study was to further knowledge development concerning the formation of a sense of identity and intimacy. The study drew on the growing recognition among researchers in the psychosocial development field of the need to target interventions at the interface of identity and intimacy development. Its specific aim was to address the question of whether intervention procedures could be developed for fostering identity and intimacy exploration and development. Using both qualitative and quantitative measurements, the results appeared to clearly support an affirmative answer to this question. A total of sixty-three middle adolescent students from an urban public high school participated in this study: 29 participants in the treatment group and 34 in the comparison group were pre- and post-tested on measures of identity and intimacy. Participants were multiethnic urban youth who presented themselves for relationship counseling. Repeated measures analyses of variance (RM-ANOVAs), used to evaluate the impact of the intervention on the quantitative measures of identity and intimacy exploration, clearly supported the efficacy of the intervention. In addition, the findings provided tentative support for the view that the increase in exploration that results from entering a period of active exploration is associated with a “loosening” of commitment. Finally, the findings of this study also contributed to the empirical knowledge base on procedures for intervening in the process of intimacy development. More specifically, both the qualitative and quantitative findings began to shed some light on the potential impact of exploration for interpersonal insight as a process for fostering intimacy development.
Abstract:
The Ellison Executive Mentoring Inclusive Community Building (ICB) Model is a paradigm for initiating and implementing projects utilizing executives and professionals from a variety of fields and industries, university students, and pre-college students. The model emphasizes adherence to ethical values and promotes inclusiveness in community development. It is a hierarchical model in which actors at each level of operation serve as mentors to the next. Through a three-step process of content, process, and product, participants are trained within this mentoring and apprenticeship paradigm in conflict resolution, and they receive sensitivity and diversity training through an interactive and dramatic exposition. The content phase introduces participants to the model's philosophy, ethics, values and methods of operation. The process used to teach and reinforce its precepts consists of the mentoring and apprenticeship activities and projects in which the participants engage, and the end product demonstrates their knowledge and understanding of the model's concepts. This study sought to ascertain, from the participants' perspectives, whether the model's mentoring approach is an effective means of fostering inclusiveness, based upon their own experiences in using it. The research utilized a qualitative approach and included data from field observations, individual and group interviews, and written accounts of participants' attitudes. Participants complete ICB projects utilizing The Ellison Model as a method of development and implementation. They generally perceive the model as a viable tool for dealing with diversity issues, whether at work, at school, or at home. The projects are also instructional: whether participants are mentored or serve as apprentices, they gain useful skills and knowledge about their careers.
Since the model is relatively new, there is ample room for research in a variety of areas, including organizational studies to determine its effectiveness in combating problems related to various kinds of discrimination.
Abstract:
The primary aim of this dissertation is to develop data mining tools for knowledge discovery in biomedical data when multiple (homogeneous or heterogeneous) sources of data are available. The central hypothesis is that, when information from multiple data sources is used appropriately and effectively, knowledge discovery can be achieved better than is possible from a single source alone.

Recent advances in high-throughput technology have enabled biomedical researchers to generate large volumes of diverse types of data on a genome-wide scale. These data include DNA sequences, gene expression measurements, and much more; they provide the motivation for building analysis tools to elucidate the modular organization of the cell. The challenges include efficiently and accurately extracting information from the multiple data sources; representing the information effectively; developing analytical tools; and interpreting the results in the context of the domain.

The first part considers the application of feature-level integration to design classifiers that discriminate between soil types. The machine learning tools SVM and KNN were used to successfully distinguish between several soil samples.

The second part considers clustering using multiple heterogeneous data sources. The resulting Multi-Source Clustering (MSC) algorithm was shown to perform better than clustering methods that use only a single data source or a simple feature-level integration of heterogeneous data sources.

The third part proposes a new approach to effectively incorporate incomplete data into clustering analysis. Adapted from the K-means algorithm, the Generalized Constrained Clustering (GCC) algorithm makes use of incomplete data, in the form of constraints, to perform exploratory analysis. Novel approaches for extracting constraints were proposed. For sufficiently large constraint sets, the GCC algorithm outperformed the MSC algorithm.

The last part considers the problem of providing a theme-specific environment for mining multi-source biomedical data. The database PlasmoTFBM, focusing on gene regulation of Plasmodium falciparum, contains diverse information and has a simple interface that allows biologists to explore the data. It provided a framework for comparing different analytical tools for predicting regulatory elements and for designing useful data mining tools. The conclusion is that the experiments reported in this dissertation strongly support the central hypothesis.
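The idea of folding constraints into a K-means-style procedure, as the GCC algorithm does, can be illustrated with a toy sketch: must-link groups are collapsed into size-weighted super-points, plain k-means runs on the super-points, and each member inherits its group's label. This is a generic constrained-clustering illustration under that assumption, not the dissertation's actual GCC algorithm; all details are hypothetical.

```python
def constrained_kmeans(points, k, groups, iters=10):
    """Toy constrained k-means: indices listed together in `groups`
    (must-link constraints) always receive the same cluster label.
    Assumes every point appears in exactly one group, len(groups) >= k."""
    dim = len(points[0])
    supers = []
    for g in groups:
        # Collapse each must-link group into its mean, weighted by size.
        mean = [sum(points[i][d] for i in g) / len(g) for d in range(dim)]
        supers.append((mean, len(g), g))
    centroids = [list(supers[i][0]) for i in range(k)]  # seed from first k
    for _ in range(iters):
        assign = [min(range(k),
                      key=lambda c, p=p: sum((p[d] - centroids[c][d]) ** 2
                                             for d in range(dim)))
                  for p, _, _ in supers]
        for c in range(k):
            members = [(p, w) for (p, w, _), a in zip(supers, assign) if a == c]
            if members:
                total = sum(w for _, w in members)
                centroids[c] = [sum(p[d] * w for p, w in members) / total
                                for d in range(dim)]
    labels = [None] * len(points)
    for (_, _, g), a in zip(supers, assign):
        for i in g:
            labels[i] = a
    return labels

# Four 2-D points with two must-link pairs, split into k=2 clusters.
labels = constrained_kmeans([(0, 0), (0, 1), (10, 10), (10, 11)],
                            k=2, groups=[[0, 1], [2, 3]])
```

Collapsing constrained points into super-points guarantees the constraints are satisfied by construction, which is one common design choice; other constrained-clustering variants instead enforce constraints during the assignment step.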
Abstract:
A major portion of hurricane-induced economic loss originates from damage to building structures. Such damage is typically grouped into three main categories: exterior, interior, and contents damage. Although the latter two types in most cases cause more than 50% of the total loss, little has been done to investigate the physical damage process and unveil the interdependence of interior damage parameters. Building interior and contents damage is mainly due to wind-driven rain (WDR) intrusion through building envelope defects, breaches, and other functional openings. The limited research, and the resulting knowledge gaps, are for the most part due to the complexity of damage phenomena during hurricanes and the lack of established measurement methodologies for quantifying rainwater intrusion. This dissertation focuses on devising methodologies for large-scale experimental simulation of tropical cyclone WDR and for measuring rainwater intrusion, in order to acquire benchmark test-based data for the development of a hurricane-induced building interior and contents damage model. Target WDR parameters derived from tropical cyclone rainfall data were used to simulate WDR characteristics at the Wall of Wind (WOW) facility. The proposed WDR simulation methodology presents detailed procedures for selecting the type and number of nozzles, formulated on the basis of a tropical cyclone WDR study. The simulated WDR was then used to investigate experimentally the mechanisms of rainwater deposition and intrusion in buildings. A test-based dataset of two rainwater intrusion parameters that quantify the distribution of directly impinging raindrops and of surface runoff rainwater over the building surface, the rain admittance factor (RAF) and the surface runoff coefficient (SRC) respectively, was developed using common shapes of low-rise buildings.

The dataset was applied to a newly formulated WDR estimation model to predict the volume of rainwater ingress through envelope openings such as wall and roof deck breaches and window sill cracks. Validation of the new model against experimental data indicated reasonable estimation of rainwater ingress through envelope defects and breaches during tropical cyclones. The WDR estimation model and the experimental dataset of WDR parameters developed in this dissertation can be used to enhance the prediction capabilities of existing interior damage models such as the Florida Public Hurricane Loss Model (FPHLM).
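The role of the two measured parameters can be illustrated with a back-of-the-envelope ingress estimate: direct impinging rain scaled by the RAF plus runoff from the wetted surface above the opening scaled by the SRC. This is not the dissertation's actual WDR estimation model; the functional form, parameter values, and units below are all illustrative assumptions.

```python
def rainwater_ingress(wdr_rate, raf, src, breach_area, catchment_area, hours):
    """Illustrative ingress estimate combining the two parameters:
    directly impinging rain scaled by the rain admittance factor (RAF)
    plus surface runoff reaching the opening scaled by the surface
    runoff coefficient (SRC). Units: mm/h and m^2, so 1 mm of rain over
    1 m^2 equals 1 litre; the result is in litres."""
    direct = raf * wdr_rate * breach_area      # litres/h through the breach
    runoff = src * wdr_rate * catchment_area   # litres/h arriving as runoff
    return (direct + runoff) * hours

# Assumed scenario: 50 mm/h wind-driven rain on a 0.5 m^2 breach fed by
# 4 m^2 of wall surface above it, sustained for 2 hours.
volume = rainwater_ingress(50, raf=0.8, src=0.1,
                           breach_area=0.5, catchment_area=4.0, hours=2)
```

A model of this shape makes clear why both parameters had to be measured: a small breach under a large runoff catchment can admit as much water as direct impingement alone.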