869 results for BWW ontology
Abstract:
Building information modeling (BIM) is an emerging technology and process that provides rich and intelligent design information models of a facility, enabling enhanced communication, coordination, analysis, and quality control throughout all phases of a building project. Although there are many documented benefits of BIM for construction, identifying essential construction-specific information out of a BIM in an efficient and meaningful way is still a challenging task. This paper presents a framework that combines feature-based modeling and query processing to leverage BIM for construction. The feature-based modeling representation implemented enriches a BIM by representing construction-specific design features relevant to different construction management (CM) functions. The query processing implemented allows for increased flexibility to specify queries and rapidly generate the desired view from a given BIM according to the varied requirements of a specific practitioner or domain. Central to the framework is the formalization of construction domain knowledge in the form of a feature ontology and query specifications. The implementation of our framework enables the automatic extraction and querying of a wide range of design conditions that are relevant to construction practitioners. The validation studies conducted demonstrate that our approach is significantly more effective than existing solutions. The research described in this paper has the potential to improve the efficiency and effectiveness of decision-making processes in different CM functions.
Abstract:
Finding and labelling semantic feature patterns of documents in a large, spatial corpus is a challenging problem. Text documents have characteristics that make semantic labelling difficult; the rapidly increasing volume of online documents creates a bottleneck in finding meaningful textual patterns. Aiming to deal with these issues, we propose an unsupervised document labelling approach based on semantic content and feature patterns. A world ontology with extensive topic coverage is exploited to supply controlled, structured subjects for labelling. An algorithm is also introduced to reduce dimensionality based on the study of ontological structure. The proposed approach was promisingly evaluated by comparison with typical machine learning methods including SVMs, Rocchio, and kNN.
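The subject-mapping idea in this abstract can be sketched as a cosine-similarity match between a document's term vector and term vectors derived from ontology subjects. The ontology, subject names, and documents below are toy illustrations, not the paper's actual world ontology or algorithm.

```python
import math
from collections import Counter

# Hypothetical toy "world ontology": subject -> characteristic terms.
ONTOLOGY = {
    "machine_learning": ["model", "training", "classifier", "feature"],
    "astronomy": ["star", "galaxy", "telescope", "orbit"],
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def label_document(text: str) -> str:
    """Assign the ontology subject whose term vector best matches the document."""
    doc_vec = Counter(text.lower().split())
    subject_vecs = {s: Counter(terms) for s, terms in ONTOLOGY.items()}
    return max(subject_vecs, key=lambda s: cosine(doc_vec, subject_vecs[s]))

doc = "the classifier was trained on a large feature set and the model improved"
print(label_document(doc))  # machine_learning
```

A real system would also need the dimensionality-reduction step the abstract mentions; this sketch only shows the controlled-subject labelling idea.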
Abstract:
In order to comprehend user information needs by concepts, this paper introduces a novel method to match relevance features with ontological concepts. The method first discovers relevance features from user local instances. Then, a concept matching approach is developed for matching these features to accurate concepts in a global knowledge base. This approach is significant for the transition from informative descriptors to conceptual descriptors. The proposed method is elaborately evaluated by comparing against three information gathering baseline models. The experimental results show that the matching approach is successful and achieves a series of remarkable improvements in search effectiveness.
Abstract:
Identifying the design features that impact construction is essential to developing cost effective and constructible designs. The similarity of building components is a critical design feature that affects method selection, productivity, and ultimately construction cost and schedule performance. However, there is limited understanding of what constitutes similarity in the design of building components and limited computer-based support to identify this feature in a building product model. This paper contributes a feature-based framework for representing and reasoning about component similarity that builds on ontological modelling, model-based reasoning and cluster analysis techniques. It describes the ontology we developed to characterize component similarity in terms of the component attributes, the direction, and the degree of variation. It also describes the generic reasoning process we formalized to identify component similarity in a standard product model based on practitioners' varied preferences. The generic reasoning process evaluates the geometric, topological, and symbolic similarities between components, creates groupings of similar components, and quantifies the degree of similarity. We implemented this reasoning process in a prototype cost estimating application, which creates and maintains cost estimates based on a building product model. Validation studies of the prototype system provide evidence that the framework is general and enables a more accurate and efficient cost estimating process.
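The grouping-and-quantification step described in this abstract can be sketched as a greedy threshold clustering over component attribute vectors, with the degree of similarity measured as the fraction of attributes within a tolerance. The component names, attributes, and tolerance below are illustrative assumptions, not the paper's ontology or reasoning process.

```python
# Hypothetical components described by (length, width, height) in metres.
components = {
    "beam_A": (6.0, 0.3, 0.5),
    "beam_B": (6.0, 0.3, 0.5),
    "beam_C": (6.1, 0.3, 0.5),
    "column_D": (0.4, 0.4, 3.0),
}

def degree_of_similarity(a, b, tolerance=0.2):
    """Fraction of attributes whose relative variation is within tolerance."""
    within = sum(
        1 for x, y in zip(a, b)
        if abs(x - y) <= tolerance * max(abs(x), abs(y), 1e-9)
    )
    return within / len(a)

def group_similar(components, threshold=1.0):
    """Greedy grouping: a component joins the first group whose seed it matches."""
    groups = []
    for name, attrs in components.items():
        for group in groups:
            seed = components[group[0]]
            if degree_of_similarity(attrs, seed) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

print(group_similar(components))  # [['beam_A', 'beam_B', 'beam_C'], ['column_D']]
```

The paper's framework also weighs direction of variation and practitioners' preferences; this sketch only shows the attribute-variation grouping idea.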
Abstract:
In recent years, there has been a growing interest from the design and construction community to adopt Building Information Models (BIM). BIM provides semantically-rich information models that explicitly represent both 3D geometric information (e.g., component dimensions), along with non-geometric properties (e.g., material properties). While the richness of design information offered by BIM is evident, there are still tremendous challenges in getting construction-specific information out of BIM, limiting the usability of these models for construction. In this paper, we describe our approach for extracting construction-specific design conditions from a BIM model based on user-defined queries. This approach leverages an ontology of features we are developing to formalize the design conditions that affect construction. Our current implementation analyzes the component geometry and topological relationships between components in a BIM model represented using the Industry Foundation Classes (IFC) to identify construction features. We describe the reasoning process implemented to extract these construction features, and provide a critique of the IFCs' ability to support the querying process. We use examples from two case studies to illustrate the construction features, the querying process, and the challenges involved in deriving construction features from an IFC model.
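The topological-relationship analysis this abstract describes can be illustrated, in highly simplified form, by detecting contacts between components reduced to axis-aligned bounding boxes. Real IFC geometry is far richer, and the component names and coordinates here are invented for illustration; this is not the paper's reasoning process or an IFC toolkit API.

```python
# Hypothetical simplification: each component is an axis-aligned bounding
# box (xmin, ymin, zmin, xmax, ymax, zmax); real IFC geometry is richer.
wall = (0.0, 0.0, 0.0, 5.0, 0.2, 3.0)
slab = (0.0, 0.0, 3.0, 5.0, 6.0, 3.3)   # sits on top of the wall
door = (2.0, 0.0, 0.0, 2.9, 0.2, 2.1)   # embedded in the wall
far_column = (10.0, 10.0, 0.0, 10.4, 10.4, 3.0)

def adjacent(a, b, tol=1e-6):
    """True if the boxes overlap or touch on every axis (a topological contact)."""
    for i in range(3):
        if a[i] > b[i + 3] + tol or b[i] > a[i + 3] + tol:
            return False
    return True

pairs = [("wall", wall), ("slab", slab), ("door", door), ("column", far_column)]
contacts = [(m, n) for i, (m, a) in enumerate(pairs)
            for n, b in pairs[i + 1:] if adjacent(a, b)]
print(contacts)  # [('wall', 'slab'), ('wall', 'door')]
```

Contacts like these would be the raw input to feature identification (e.g., recognizing a slab-on-wall bearing condition); classifying them is where the feature ontology comes in.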
Abstract:
We propose CIMD (Collaborative Intrusion and Malware Detection), a scheme for the realization of collaborative intrusion detection approaches. We argue that teams, respectively detection groups with a common purpose for intrusion detection and response, improve the measures against malware. CIMD provides a collaboration model, a decentralized group formation and an anonymous communication scheme. Participating agents can convey intrusion detection related objectives and associated interests for collaboration partners. These interests are based on an intrusion detection related ontology, incorporating network and hardware configurations and detection capabilities. Anonymous communication provided by CIMD allows communication beyond suspicion, i.e. the adversary cannot perform better than guessing an IDS to be the source of a message at random. The evaluation takes place with the help of NeSSi² (www.nessi2.de), the Network Security Simulator, a dedicated environment for analysis of attacks and countermeasures in mid-scale and large-scale networks. A CIMD prototype is being built based on the JIAC agent framework (www.jiac.de).
Abstract:
Background Predicting protein subnuclear localization is a challenging problem. Some previous works based on non-sequence information including Gene Ontology annotations and kernel fusion have respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; another is to develop an ensemble method to improve prediction performance using comprehensive information represented in the form of high dimensional feature vector obtained by 11 feature extraction methods. Methodology/Principal Findings A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: Lei dataset, multi-localization dataset, SNL9 dataset and a new independent dataset. The overall accuracy of prediction for 6 localizations on Lei dataset is 75.2% and that for 9 localizations on SNL9 dataset is 72.1% in the leave-one-out cross validation, 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with those existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. Conclusions It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method. 
It is freely available at http://bioinformatics.awowshop.com/snlpred_page.php.
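The leave-one-out cross validation protocol used in the evaluation above can be illustrated with a toy 1-nearest-neighbour classifier over composition-style feature vectors. The paper's actual system is a two-stage multiclass SVM over 11 feature extraction methods, which this stdlib-only sketch does not reproduce; the sample data and localization labels are invented.

```python
import math

# Toy feature vectors (e.g., amino-acid composition fractions) with labels.
samples = [
    ([0.9, 0.1], "nucleolus"),
    ([0.8, 0.2], "nucleolus"),
    ([0.85, 0.15], "nucleolus"),
    ([0.1, 0.9], "nucleoplasm"),
    ([0.2, 0.8], "nucleoplasm"),
    ([0.15, 0.85], "nucleoplasm"),
]

def predict_1nn(train, x):
    """Label of the training sample closest to x (Euclidean distance)."""
    best = min(train, key=lambda s: math.dist(s[0], x))
    return best[1]

def loo_accuracy(samples):
    """Leave-one-out cross validation: hold each sample out once, train on the rest."""
    correct = 0
    for i, (x, y) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        if predict_1nn(train, x) == y:
            correct += 1
    return correct / len(samples)

print(loo_accuracy(samples))  # 1.0
```

The paper's 75.2% and 72.1% overall accuracies are computed with exactly this hold-one-out protocol, just with an SVM in place of the toy classifier.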
Abstract:
Qualitative research methods are widely accepted in Information Systems and multiple approaches have been successfully used in IS qualitative studies over the years. These approaches include narrative analysis, discourse analysis, grounded theory, case study, ethnography and phenomenological analysis. Guided by critical, interpretive and positivist epistemologies (Myers 1997), qualitative methods are continuously growing in importance in our research community. In this special issue, we adopt Van Maanen's (1979: 520) definition of qualitative research as an umbrella term to cover an “array of interpretive techniques that can describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain more or less naturally occurring phenomena in the social world”. In the call for papers, we stated that the aim of the special issue was to provide a forum within which we can present and debate the significant number of issues, results and questions arising from the pluralistic approach to qualitative research in Information Systems. We recognise both the potential and the challenges that qualitative approaches offer for accessing the different layers and dimensions of a complex and constructed social reality (Orlikowski, 1993). The special issue is also a response to the need to showcase the current state of the art in IS qualitative research and highlight advances and issues encountered in the process of continuous learning that includes questions about its ontology, epistemological tenets, theoretical contributions and practical applications.
Abstract:
The present study examined the historical basis of the Australian disability income support system from 1908 to 2007. Although designed as a safety net for people with a disability, the disability income support system within Australia has been highly targeted. The original eligibility criteria of "permanently incapacitated for work", medical criteria and later "partially capacitated for work" potentially contained ideological inferences that permeated across the time period. This represents an important area for study given the potential consequence for disability income support to marginalise people with a disability. Social policy and disability policy theorists, including Saunders (2007, Social Policy Research Centre [SPRC]) and Gibilisco (2003) have provided valuable insight into some of the effects of disability policy and poverty. Yet while these theorists argued for some form of income support they did not propose a specific form of income security for further exploration. Few studies have undertaken a comprehensive review of the history of disability income support within the Australian context. This thesis sought to redress these gaps by examining disability income support policy within Australia. The research design consisted of an in-depth critical historical-comparative policy analysis methodology. The use of critical historical-comparative policy analysis allowed the researcher to trace the construction of disability within the Australian disability income support policy across four major historical epochs. A framework was developed specifically to guide analysis of the data. The critical discourse analysis method helped to understand the underlying ideological dimensions that led to the predominance of one particular approach over another. Given this, the research purpose of the study centred on: i. Tracing the history of the Australian disability income support system. ii. Examining the historical patterns and ideological assumptions over time. iii. 
Exploring the historical patterns and ideological assumptions underpinning an alternative model (Basic Income) and the extent to which each model promotes the social citizenship of people with a disability. The research commitment to a social-relational ontology and the quest for social change centred on the idea that "there has to be a better way" in the provision of disability income support. This theme of searching for an alternative reality in disability income support policy resonated throughout the thesis. This thesis found that the Australian disability income support system is disabling in nature and generates categories of disability on the basis of ableness. From the study, ableness became a condition for citizenship. This study acknowledged that, in reality, income support provision reflects only one aspect of the disabling nature of society which requires redressing. Although there are inherent tensions in any redistributive strategy, the Basic Income model potentially provides an alternative to the Australian disability income support system, given its grounding in social citizenship. The thesis findings have implications for academics, policy-makers and practitioners in terms of developing better ways to understand disability constructs in disability income support policy. The thesis also makes a contribution in terms of promoting income support policies based on the rights of all people, not just a few.
Abstract:
Building and maintaining software are not easy tasks. However, thanks to advances in web technologies, a new paradigm is emerging in software development. The Service Oriented Architecture (SOA) is a relatively new approach that helps bridge the gap between business and IT and also helps systems remain flexible. However, there are still several challenges with SOA. As the number of available services grows, developers are faced with the problem of discovering the services they need. Public service repositories such as Programmable Web provide only limited search capabilities. Several mechanisms have been proposed to improve web service discovery by using semantics. However, most of these require manually tagging the services with concepts in an ontology. Adding semantic annotations is a non-trivial process that requires a certain skill-set from the annotator and also the availability of domain ontologies that include the concepts related to the topics of the service. These issues have prevented these mechanisms from becoming widespread. This thesis focuses on two main problems. First, to avoid the overhead of manually adding semantics to web services, several automatic methods to include semantics in the discovery process are explored. Although experimentation with some of these strategies has been conducted in the past, the results reported in the literature are mixed. Second, Wikipedia is explored as a general-purpose ontology. The benefit of using it as an ontology is assessed by comparing these semantics-based methods to classic term-based information retrieval approaches. The contribution of this research is significant because, to the best of our knowledge, a comprehensive analysis of the impact of using Wikipedia as a source of semantics in web service discovery does not exist.
The main output of this research is a web service discovery engine that implements these methods and a comprehensive analysis of the benefits and trade-offs of these semantics-based discovery approaches.
Abstract:
Increasingly, schools are being asked to meet the challenges of providing inclusive classrooms for all children. Inclusion is no longer about special education for a special group of students. It is about school improvement in order to bring about the changes that are needed to classroom practices to ensure the improvement of student learning outcomes. Inclusion is no longer a policy initiative. Rather it has been transformed to become a process that moves a school towards inclusive practices that will result in school improvement, heightened student learning outcomes and greater opportunities for all students to gain equal access to education. This study focuses on the challenge of diversity as it translates into implementing inclusive practices across two secondary school contexts. I have undertaken this research in my role as a Learning Support Teacher over a period of five years. Central to my research is a constructivist ontology and a practice epistemology that aligns with a practitioner research methodology of action research. Seven generalisable propositions have emerged from this research that inform the strategies I am using to more easily accommodate legislated inclusivity. These propositions include: 1. School communities need to share a common understanding of equity. 2. The school principal must provide overt leadership in moving towards an inclusive school culture. 3. A whole-school approach is needed to narrow the gap between inclusion rhetoric and classroom practice. 4. Pedagogical reform is the most effective strategy for catering for diverse student learning needs. 5. Differentiating curriculum is achieved when collaborative planning teams develop appropriate units of work. 6. School communities need to make a commitment to gather, share and manage relevant information concerning students. 7. The Learning Support Teacher needs to be repositioned within a curriculum planning team.
Abstract:
Conceptual modelling supports developers and users of information systems in areas of documentation, analysis or system redesign. The ongoing interest in the modelling of business processes has led to a variety of different grammars, raising the question of the quality of these grammars for modelling. An established way of evaluating the quality of a modelling grammar is by means of an ontological analysis, which can determine the extent to which grammars contain construct deficit, overload, excess or redundancy. While several studies have shown the relevance of most of these criteria, predictions about construct redundancy have yielded inconsistent results in the past, with some studies suggesting that redundancy may even be beneficial for modelling in practice. In this paper we seek to contribute to clarifying the concept of construct redundancy by introducing a revision to the ontological analysis method. Based on the concept of inheritance we propose an approach that distinguishes between specialized and distinct construct redundancy. We demonstrate the potential explanatory power of the revised method by reviewing and clarifying previous results found in the literature.
Abstract:
Text categorisation is challenging, due to the complex structure with heterogeneous, changing topics in documents. The performance of text categorisation relies on the quality of samples, the effectiveness of document features, and the topic coverage of categories, depending on the strategies employed: supervised or unsupervised, single-labelled or multi-labelled. Attempting to deal with these reliability issues in text categorisation, we propose an unsupervised multi-labelled text categorisation approach that maps the local knowledge in documents to global knowledge in a world ontology to optimise the categorisation result. The conceptual framework of the approach consists of three modules: pattern mining for feature extraction, feature-subject mapping for categorisation, and concept generalisation for optimised categorisation. The approach has been promisingly evaluated by comparison with typical text categorisation methods, based on the ground truth encoded by human experts.
Abstract:
This thesis analysed the theoretical and ontological issues of previous scholarship concerning information technology and indigenous people. As an alternative, the thesis used the framework of actor-network-theory, especially through historiographical and ethnographic techniques. The thesis revealed an assemblage of indigenous/digital enactments striving for relevance and avoiding obsolescence. It also recognised heterogeneities, including user-ambivalences, oscillations, noise, non-coherences and disruptions, as part of the milieu of the daily digital lives of indigenous people. By taking heterogeneities into account, the thesis ensured that the data “speaks for itself” and that social inquiry is not overtaken by ideology and ontology.
Abstract:
The emic perspective of criminal desistance (ex-offenders’ personal explanations of how they gave up crime) is largely ignored by criminology. This thesis attempts to address this absence of the storyteller’s perspective by inviting desisters to participate in the exploration and interpretation of their individual desistance journeys. Significant attention is drawn to the importance of philosophical self-enquiry to personal change. This detailed journey through the desistance stories of five ex-offenders has produced an emphatic re-statement of the need for the non-judgemental listener as the beginning point of cathartic healing in damaged lives.