835 results for behavior-based systems
Abstract:
This book explores processes for the retrieval, classification, and integration of construction images in AEC/FM model-based systems. The author describes a combination of techniques from image and video processing, computer vision, information retrieval, statistics, and content-based image and video retrieval, integrated into a novel method for retrieving construction site image data related to components of a project model. The method has been tested on construction site images from a variety of sources, including past and current building construction and transportation projects, and is able to automatically classify, store, integrate, and retrieve image data files in inter-organizational systems so that they can be used in project management tasks.
Abstract:
The Architecture, Engineering, Construction and Facilities Management (AEC/FM) industry is rapidly becoming a multidisciplinary, multinational and multi-billion dollar economy, involving large numbers of actors working concurrently at different locations and using heterogeneous software and hardware technologies. Since the beginning of the last decade, a great deal of effort has been spent within the field of construction IT to integrate data and information from most of the computer tools used to carry out engineering projects. For this purpose, a number of integration models have been developed, such as web-centric systems and construction project modeling, a useful approach for representing construction projects and integrating data from various civil engineering applications. In the modern, distributed and dynamic construction environment, it is important to retrieve and exchange information from different sources and in different data formats in order to improve the processes supported by these systems. Previous research demonstrated that a major hurdle in AEC/FM data integration is the industry's variety of data types, and that a significant part of the data is stored in semi-structured or unstructured formats. Therefore, new integrative approaches are needed to handle non-structured data types like images and text files. This research focuses on the integration of construction site images. These images are a significant part of the construction documentation, with thousands stored in the site photograph logs of large-scale projects. However, locating and identifying the data needed for important decision-making processes is a hard and time-consuming task, and so far there are no automated methods for associating these images with other related objects. Therefore, automated methods for the integration of construction images are important for construction information management.
During this research, processes for the retrieval, classification, and integration of construction images in AEC/FM model-based systems have been explored. Specifically, a combination of techniques from image and video processing, computer vision, information retrieval, statistics and content-based image and video retrieval has been deployed to develop a methodology for retrieving construction site image data related to components of a project model. This method has been tested on construction site images from a variety of sources, such as past and current building construction and transportation projects, and is able to automatically classify, store, integrate and retrieve image data files in inter-organizational systems so that they can be used in project management tasks.
Abstract:
In the modern and dynamic construction environment it is important to access information quickly and efficiently in order to improve decision-making by construction managers. With today's technologies, this is in most cases straightforward for data types with an inherent structure that reside primarily in established databases, such as estimating and scheduling software. However, previous research has demonstrated that a significant percentage of construction data is stored in semi-structured or unstructured formats (text, images, etc.) and that manually locating and identifying such data is a hard and time-consuming task. This paper focuses on construction site image data and presents a novel image retrieval model that interfaces with established construction data management structures. This model is designed to retrieve images related to objects in project models or construction databases using location, date, and material information (extracted from the image content with pattern recognition techniques).
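The kind of metadata-driven retrieval interface the abstract describes might look like the following sketch. The field names and filters are hypothetical, chosen to mirror the three cues the paper mentions (location, date, and material):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SiteImage:
    path: str              # file in the site photograph log
    location: str          # hypothetical zone / building-element id
    taken: date            # capture date
    materials: frozenset   # material labels, e.g. from content recognition

def retrieve(images, location=None, period=None, material=None):
    """Filter a photo log on any combination of location, a (start, end)
    date range, and a material label; None means 'no constraint'."""
    hits = []
    for img in images:
        if location is not None and img.location != location:
            continue
        if period is not None and not (period[0] <= img.taken <= period[1]):
            continue
        if material is not None and material not in img.materials:
            continue
        hits.append(img)
    return hits
```

A query for all concrete-related photos of one zone in a given month would then combine all three filters in a single `retrieve` call.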
Abstract:
A new carrier frequency offset estimation scheme for orthogonal frequency division multiplexing (OFDM) is proposed. The scheme includes coarse frequency offset estimation and fine frequency offset estimation. The coarse frequency offset estimation method we present is an improvement of Zhang's method; its estimation range is as large as the overall signal bandwidth. A new fine frequency offset estimation algorithm is also discussed in this paper, with better performance than Schmidl's algorithm. The simulation system is based on the high-rate WLAN standard adopted by the IEEE 802.11 standardization group. Numerical results are presented to demonstrate the performance of the proposed algorithm.
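The Schmidl-style fine estimator that the abstract benchmarks against can be sketched in a few lines: the preamble's two halves are identical at the transmitter, so a carrier frequency offset rotates the second half relative to the first, and the angle of the half-symbol correlation reveals the fractional offset. This is a minimal sketch, not the paper's improved algorithm; variable names are illustrative:

```python
import numpy as np

def schmidl_fine_cfo(rx, L):
    """Estimate fractional CFO (in subcarrier spacings, symbol length
    N = 2L) from a preamble whose two halves of length L are identical
    at the transmitter."""
    # Half-symbol correlation: a CFO of eps subcarrier spacings rotates
    # the second half by pi*eps relative to the first.
    P = np.sum(np.conj(rx[:L]) * rx[L:2 * L])
    return np.angle(P) / np.pi   # valid for |eps| < 1

# Demo with a known, synthetic offset
rng = np.random.default_rng(0)
L = 32
half = rng.standard_normal(L) + 1j * rng.standard_normal(L)
tx = np.concatenate([half, half])        # identical preamble halves
eps_true = 0.21                          # CFO in subcarrier spacings
n = np.arange(2 * L)
rx = tx * np.exp(2j * np.pi * eps_true * n / (2 * L))
```

The limited range (less than one subcarrier spacing) is exactly why a coarse estimator with a range covering the whole signal bandwidth, as the abstract proposes, is needed alongside it.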
Abstract:
Ontologies play a core role in providing shared knowledge models to the semantic-driven applications targeted by the Semantic Web. Ontology metrics have become an important area because they can help ontology engineers to assess ontologies and better control the project management and development of ontology-based systems, thereby reducing the risk of project failure. In this paper, we propose a set of ontology cohesion metrics focused on measuring (possibly inconsistent) ontologies in the context of the dynamic and changing Web. They are: Number of Ontology Partitions (NOP), Number of Minimally Inconsistent Subsets (NMIS) and Average Value of Axiom Inconsistencies (AVAI). These metrics measure ontological semantics rather than ontological structure. They are theoretically validated to ensure their soundness, and further empirically validated on a standard test set of debugging ontologies. Algorithms to compute these metrics are also discussed. The metrics proposed in this paper can serve as a useful complement to existing ontology cohesion metrics.
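One plausible reading of a partition-counting metric like NOP (this is a hypothetical sketch; the paper's formal definition may differ) is the number of connected components of axioms linked by shared terms, which can be computed with a small union-find pass:

```python
from collections import defaultdict

def ontology_partitions(axioms):
    """Count groups of axioms connected by shared terms.
    Each axiom is modeled simply as a set of term names."""
    parent = list(range(len(axioms)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    # Index axioms by the terms they mention, then union all axioms
    # that share a term.
    by_term = defaultdict(list)
    for i, ax in enumerate(axioms):
        for term in ax:
            by_term[term].append(i)
    for ids in by_term.values():
        for j in ids[1:]:
            parent[find(ids[0])] = find(j)

    return len({find(i) for i in range(len(axioms))})
```

On this reading, `[{"A","B"}, {"B","C"}, {"D","E"}]` forms two partitions: the first two axioms share the term `B`, while the third is disconnected.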
Abstract:
Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system he interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanations fail to provide sufficient information, and we define new forms of explanations that aim to be more informative. Although explanation generation is a very hard task, in the applications we consider we must provide a satisfactory level of interactivity and therefore cannot afford long computation times. We introduce the concept of representative sets of relaxations: a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and we present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow, and we present algorithms to compute such relaxations in times compatible with interactivity, achieved by interchangeably making use of different types of compiled representations. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
Abstract:
A zone-based systems design framework is described and utilised in the implementation of a message authentication code (MAC) algorithm based on symmetric key block ciphers. The resulting block cipher based MAC algorithm may be used to provide assurance of the authenticity and, hence, the integrity of binary data. Using software simulation to benchmark against the de facto cipher block chaining MAC (CBC-MAC) variant used in the TinySec security protocol for wireless sensor networks and against the NIST cipher block chaining MAC standard, CMAC, we show that our zone-based systems design framework can lead to block cipher based MAC constructs with improved message processing efficiency, throughput and latency.
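The chaining structure of the CBC-MAC baseline mentioned above can be sketched as follows. Since no real block cipher is available in the Python standard library, a hash-based stand-in plays the cipher's role here; this is illustrative of the chaining only, not a secure construction, and the padding scheme is one common choice rather than the paper's:

```python
import hashlib

BLOCK = 16  # 128-bit blocks

def toy_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in for a real block cipher such as AES; illustration only."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    """Classic CBC-MAC: zero IV, each cipher output chains (via XOR)
    into the next block; the final cipher output is the tag.
    Note: plain CBC-MAC is secure only for fixed-length messages,
    which is one of the issues CMAC addresses."""
    # Pad with 0x80 then zeros up to a block multiple.
    padded = msg + b"\x80" + b"\x00" * ((-len(msg) - 1) % BLOCK)
    state = bytes(BLOCK)  # zero IV
    for i in range(0, len(padded), BLOCK):
        block = padded[i:i + BLOCK]
        state = toy_cipher(key, bytes(a ^ b for a, b in zip(state, block)))
    return state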
Abstract:
Currently there is extensive theoretical work on inconsistencies in logic-based systems. Recently, algorithms for identifying inconsistent clauses in a single conjunctive formula have demonstrated that practical application of this work is possible. However, these algorithms have not been extended for full knowledge base systems and have not been applied to real-world knowledge. To address these issues, we propose a new algorithm for finding the inconsistencies in a knowledge base using existing algorithms for finding inconsistent clauses in a formula. An implementation of this algorithm is then presented as an automated tool for finding inconsistencies in a knowledge base and measuring the inconsistency of formulae. Finally, we look at a case study of a network security rule set for exploit detection (QRadar) and suggest how these automated tools can be applied.
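The notion of inconsistent clause sets that such algorithms detect can be illustrated with a brute-force sketch over propositional clauses: a minimally inconsistent subset is jointly unsatisfiable while every proper subset is satisfiable. This is workable only for tiny inputs; the algorithms the abstract refers to are far more sophisticated:

```python
from itertools import combinations, product

def satisfiable(clauses):
    """Tiny truth-table SAT check. Clauses are sets of integer literals
    (positive = variable, negative = its negation)."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        model = dict(zip(vars_, bits))
        if all(any((l > 0) == model[abs(l)] for l in c) for c in clauses):
            return True
    return False

def minimal_inconsistent_subsets(clauses):
    """Enumerate subsets that are unsatisfiable but whose every proper
    subset is satisfiable. Iterating by size and discarding supersets
    of subsets already found guarantees minimality."""
    clauses = [frozenset(c) for c in clauses]
    mis = []
    for r in range(1, len(clauses) + 1):
        for sub in combinations(clauses, r):
            if satisfiable(list(sub)):
                continue
            if all(not set(m) <= set(sub) for m in mis):
                mis.append(sub)
    return mis
```

For the clause set {x1}, {¬x1}, {x2}, the single minimally inconsistent subset is {{x1}, {¬x1}}; counting and weighting such subsets is the kind of inconsistency measure the abstract applies to knowledge bases.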
Abstract:
This dissertation describes the synthesis and the study of the photophysical and biological properties of phthalocyanine-based systems. The synthesized systems consist of phthalocyanines bearing D-galactose units, cyclodextrins, [60]fullerene, or porphyrins. The first part of the dissertation addresses the synthesis of phthalocyanines substituted with D-galactose units, intended for use as photosensitizers in photodynamic therapy. The efficiency of singlet oxygen generation induced by the glycophthalocyanines, as well as their in vitro photodynamic activity in HeLa cells, is evaluated. MALDI-MS/MS studies of two pairs of isomeric glycophthalocyanines are also described. The second part describes the synthesis and characterization of supramolecular systems of phthalocyanines linked to cyclodextrins or [60]fullerene. Synthetic methodologies were developed to obtain ruthenium(II) phthalocyanine complexes axially substituted with cyclodextrin derivatives, and zinc(II) phthalocyanine complexes covalently linked to cyclodextrins. The synthesis of a phthalocyanine-[60]fullerene dyad is also described. The synthetic strategy involves an unusual first step, a reductive amination reaction, which yields a glycine-functionalized phthalocyanine, followed by the usual 1,3-dipolar cycloaddition. The third part addresses donor-acceptor porphyrin-phthalocyanine dyads as promising models for mimicking natural photosynthetic systems. Different types of porphyrin-phthalocyanine dyads, linked through the phenyl group at the meso position or through the β-pyrrolic position of the porphyrin, are synthesized and characterized. The synthetic approaches involve palladium-catalyzed amination reactions and cross-condensation reactions of two suitably substituted phthalonitriles.
Photoinduced energy- and electron-transfer processes arising in the porphyrin-phthalocyanine dyads are also described.