850 results for building information modelling


Relevance:

30.00%

Abstract:

A basic principle in data modelling is to incorporate into the modelling process whatever a priori information is available about the underlying data-generating mechanism. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to incorporate explicitly two types of prior knowledge: that the underlying data-generating mechanism exhibits a known symmetry, and that the underlying process obeys a set of given boundary value constraints. The class of orthogonal least squares regression algorithms can readily be applied to construct parsimonious grey-box RBF models with enhanced generalisation capability.
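As a minimal illustration of the symmetry case (our own sketch, not the authors' algorithm), the Python below builds an RBF model from symmetrised Gaussian pairs, so the fitted model satisfies f(x) = f(-x) by construction; ordinary least squares stands in for the paper's orthogonal least squares subset selection.

import numpy as np

def symmetric_design_matrix(X, centres, width):
    # phi_j(x) = exp(-|x - c_j|^2 / 2w^2) + exp(-|x + c_j|^2 / 2w^2),
    # so every basis function, and hence the model, is even in x.
    d_plus = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    d_minus = ((X[:, None, :] + centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d_plus / (2 * width ** 2)) + np.exp(-d_minus / (2 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.cos(X[:, 0]) + 0.05 * rng.standard_normal(200)  # even target function

centres = rng.uniform(0, 3, size=(12, 1))  # one half-space suffices by symmetry
Phi = symmetric_design_matrix(X, centres, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # plain LS; the paper uses OLS selection

x_test = np.array([[1.7], [-1.7]])
print(symmetric_design_matrix(x_test, centres, 0.8) @ w)  # identical outputs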

Relevance:

30.00%

Abstract:

Space applications demand reliable systems. Autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm robotic approaches, leading to swarm-array computing, a novel technique for achieving autonomy in distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and on swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and its tasks together, is evaluated by simulation on the SeSAm multi-agent simulator.
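A toy sketch of the "swarm of tasks" idea, under our own assumptions rather than the paper's SeSAm model: task agents sit on a grid of cores and hop to a healthy neighbour when their core fails, giving a crude form of self-healing.

import random

random.seed(1)
SIZE = 4
healthy = {(r, c): True for r in range(SIZE) for c in range(SIZE)}
tasks = {t: (random.randrange(SIZE), random.randrange(SIZE)) for t in range(5)}

def neighbours(r, c):
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (r + dr, c + dc) in healthy]

for step in range(10):
    dead = random.choice(list(healthy))   # inject a random core failure
    healthy[dead] = False
    for t, pos in tasks.items():          # each task agent checks its core
        if not healthy[pos]:
            options = [n for n in neighbours(*pos) if healthy[n]]
            if options:                   # migrate if a healthy neighbour exists
                tasks[t] = random.choice(options)

print(tasks)  # final task-to-core placement after failures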

Relevance:

30.00%

Abstract:

Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be able to interpret the information base intelligently. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta-)knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM demonstrator has been evaluated as deployed in the film post-production phase, supporting the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper reports its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes.
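As a sketch of ontology-driven retrieval in the DREAM spirit (the ontology, clip names and index structure below are invented for illustration, not DREAM's actual data model), a query concept can be expanded through the ontology before matching indexed clips:

ontology = {"explosion": ["fireball", "smoke"], "fireball": [], "smoke": []}
index = {"clip_017": {"fireball"}, "clip_042": {"smoke", "debris"}}

def expand(concept):
    # walk the ontology to collect the concept and all its subconcepts
    found, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in found:
            found.add(c)
            stack.extend(ontology.get(c, []))
    return found

query = expand("explosion")   # {"explosion", "fireball", "smoke"}
hits = [clip for clip, tags in index.items() if tags & query]
print(hits)                   # both clips match the expanded query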

Relevance:

30.00%

Abstract:

Information provision that addresses changing requirements is best supported by content management. Current information technology enables information to be stored in, and provided from, various distributed sources. Identifying and retrieving relevant information requires effective mechanisms for information discovery and assembly. This paper presents a method that enables the design of such mechanisms, with a set of techniques for articulating and profiling users' requirements, formulating information-provision specifications, managing information content in repositories, and responding dynamically to users' requirements during the process of knowledge construction. These functions are represented in an ontology which integrates the capabilities of the mechanisms. The ontological modelling in this paper adopts semiotic principles, with embedded norms to ensure a coherent course of action represented in these mechanisms.
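One plausible, much-simplified encoding of such norms as condition-action records (the field names and example norms are our assumptions, not the paper's notation):

from dataclasses import dataclass

@dataclass
class Norm:
    whenever: str   # triggering context
    agent: str      # responsible role
    deontic: str    # "obliged" | "permitted" | "prohibited"
    action: str

norms = [
    Norm("user profile updated", "content manager", "obliged",
         "refresh the information-provision specification"),
    Norm("new repository source registered", "system", "obliged",
         "re-run information discovery and assembly"),
]

def applicable(event):
    # select the norms whose trigger matches the observed event
    return [n for n in norms if n.whenever == event]

for n in applicable("user profile updated"):
    print(f"{n.agent} is {n.deontic} to {n.action}")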

Relevance:

30.00%

Abstract:

A large and complex IT project may involve multiple organizations and be constrained within a temporal period. An organization is a system comprising people, activities, processes, information, resources and goals. Understanding and modelling such a project and its interrelationships with the relevant organizations is essential for organizational project planning. This paper introduces the problem articulation method (PAM), a semiotic method for organizational infrastructure modelling. PAM offers a suite of techniques which enables the articulation of business, technical and organizational requirements, delivering an infrastructural framework to support the organization. It works by eliciting and formalizing organizational abstractions (e.g. processes, activities, relationships, responsibilities, communications, resources, agents, dependencies and constraints) and mapping these abstractions to represent the manifestation of the "actual" organization. Many analysts forgo organizational modelling methods in favour of localized, ad hoc point solutions, but these do not lend themselves to modelling organizational infrastructures. A case study of the Infrared Atmospheric Sounding Interferometer (IASI) is used to demonstrate the applicability of PAM and to examine its relevance and significance in dealing with innovation and change in organizations.
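Purely to illustrate the kinds of abstractions PAM elicits (all names below are invented), organizational units, agents and dependencies could be recorded as typed records and queried:

from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    kind: str                              # "process" | "activity" | "resource" | "agent"
    depends_on: list = field(default_factory=list)

units = {
    "calibration": Unit("calibration", "process", ["instrument data", "analyst"]),
    "instrument data": Unit("instrument data", "resource"),
    "analyst": Unit("analyst", "agent"),
}

def dependencies(name):
    # resolve the named unit's declared dependencies to records
    return [units[d] for d in units[name].depends_on if d in units]

for dep in dependencies("calibration"):
    print(dep.kind, "->", dep.name)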

Relevance:

30.00%

Abstract:

People's interaction with the indoor environment plays a significant role in buildings' energy consumption. Mismatched and delayed occupant feedback on the indoor environment reaching the building energy management system is a major barrier to the efficient energy management of buildings. There is an increasing trend towards applying digital technology to support control systems in order to achieve energy efficiency in buildings. This article introduces a holistic, integrated building energy management model called 'smart sensor, optimum decision and intelligent control' (SMODIC). The model takes occupants' responses to the indoor environment into account in the control system. A model of optimal decision-making based on multiple indoor-environment criteria has been integrated into the whole system. SMODIC combines information technology and people-centric concepts to achieve energy savings in buildings.
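A minimal sketch of the optimum-decision step, with invented criteria and weights rather than SMODIC's actual model: score candidate indoor states by a weighted sum of comfort criteria and pick the best.

def comfort_score(temp_c, co2_ppm, lux, weights=(0.5, 0.3, 0.2)):
    # each term is 1.0 at an assumed ideal value and falls off linearly
    t = max(0.0, 1 - abs(temp_c - 22) / 6)            # ideal ~22 degC
    c = max(0.0, 1 - max(co2_ppm - 600, 0) / 800)     # penalise CO2 above 600 ppm
    l = max(0.0, 1 - abs(lux - 500) / 500)            # ideal ~500 lux
    wt, wc, wl = weights
    return wt * t + wc * c + wl * l

candidates = [(20, 700, 450), (22, 650, 480), (24, 900, 520)]
best = max(candidates, key=lambda s: comfort_score(*s))
print(best, round(comfort_score(*best), 3))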

Relevance:

30.00%

Abstract:

The Group on Earth Observations System of Systems, GEOSS, is a co-ordinated initiative by many nations to address the needs for earth-system information expressed by the 2002 World Summit on Sustainable Development. We discuss the role of earth-system modelling and data assimilation in transforming earth-system observations into the predictive and status-assessment products required by GEOSS across many areas of socio-economic interest. First, we review recent gains in the predictive skill of operational global earth-system models on time-scales of days to several seasons. We then discuss recent work to develop from the global predictions a diverse set of end-user applications which can meet GEOSS requirements for information of socio-economic benefit; examples include forecasts of coastal storm surges, floods in large river basins, seasonal crop yields, and seasonal lead-time alerts for malaria epidemics. We note ongoing efforts to extend operational earth-system modelling and assimilation capabilities to atmospheric composition, in support of improved services for air-quality forecasting and treaty assessment. We next sketch likely GEOSS observational requirements in the coming decades. In concluding, we reflect on the cost of earth observations relative to the modest cost of transforming those observations into information of socio-economic value.
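The data-assimilation step mentioned above can be illustrated by the generic textbook scalar analysis update (not any GEOSS system's actual code), which blends a model forecast with an observation according to their error variances:

def analysis(forecast, obs, var_f, var_o):
    # Kalman gain for a directly observed scalar state
    gain = var_f / (var_f + var_o)
    return forecast + gain * (obs - forecast)

x_f = 15.0   # model forecast (e.g. sea-surface temperature, degC)
y = 15.8     # satellite observation of the same quantity
print(analysis(x_f, y, var_f=0.4, var_o=0.2))  # pulled 2/3 of the way to the obs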

Relevance:

30.00%

Abstract:

A palladium-catalyzed Stille coupling reaction was employed as a versatile method for the synthesis of a novel terpyridine-pincer bridging ligand (3, TPBr), 4′-{4-BrC₆H₂(CH₂NMe₂)₂-3,5}-2,2′:6′,2″-terpyridine. Mononuclear species [PdX(TP)] (X = Br, Cl), [Ru(TPBr)(tpy)](PF₆)₂, and [Ru(TPBr)₂](PF₆)₂, synthesized by selective metalation of the NCNBr pincer moiety or complexation of the terpyridine of the bifunctional ligand TPBr, were used as building blocks for the preparation of the heterodi- and trimetallic complexes [Ru(TPPdCl)(tpy)](PF₆)₂ (7) and [Ru(TPPdCl)₂](PF₆)₂ (8). The solid-state molecular structures of [PdBr(TP)] (4a) and [Ru(TPBr)₂](PF₆)₂ (6) have been determined by single-crystal X-ray analysis. The electrochemical behavior and photophysical properties of the mono- and heterometallic complexes are described. All the di- and trimetallic Ru complexes exhibit absorption bands attributable to ¹MLCT (Ru → tpy) transitions. For the heteroleptic complexes, the transitions involving the unsubstituted tpy ligand are at lower energy than those of the tpy moiety of the TPBr ligand. The absorption bands observed in the electronic spectra of TPBr and [PdCl(TP)] have been assigned with the aid of TD-DFT calculations. All complexes display weak emission, both at room temperature and in a butyronitrile glass at 77 K. The considerable red shift of the emission maxima relative to the signal of the reference compound [Ru(tpy)₂]²⁺ indicates stabilization of the luminescent ³MLCT state. For the mono- and heterometallic complexes, electrochemical and spectroscopic studies (electronic absorption and emission spectra and luminescence lifetimes recorded at room temperature and at 77 K in nitrile solvents), together with information gained from IR spectroelectrochemical studies of the dimetallic complex [Ru(TPPdSCN)(tpy)](PF₆)₂, are indicative of charge redistribution through the bridging ligand TPBr. The results are in line with weak coupling between the {Ru(tpy)₂} chromophoric unit and the (non)metalated NCN-pincer moiety.

Relevance:

30.00%

Abstract:

There are a number of challenges associated with managing knowledge and information in construction organizations delivering major capital assets. These include ever-increasing volumes of information, the loss of people through retirement or to competitors, the continuously changing nature of information, the lack of methods for eliciting useful knowledge, the development of new information technologies, and changes in management and innovation practices. Existing tools and methodologies for valuing intangible assets in fields such as engineering, project management, finance and accounting do not fully address the issues associated with the valuation of information and knowledge. Information is rarely recorded in a way that allows a document to be valued, either when produced or when subsequently retrieved and re-used. In addition, there is a wealth of tacit personal knowledge which, if codified into documentary information, may prove very valuable to operators of the finished asset or to future designers. This paper addresses the problem of information overload and identifies the differences between data, information and knowledge. An exploratory study was conducted with a leading construction consultant, examining three perspectives (business, project management and document management) through structured interviews, focusing specifically on how to value information in practical terms. Major challenges in information management are identified. A through-life Information Evaluation Methodology (IEM) is presented to reduce information overload and to make information more valuable in the future.

Relevance:

30.00%

Abstract:

In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting these data to transform them into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed, and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time-series modelling, empirical data modelling, knowledge discovery, data mining, and data fusion.
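To make the flavour of an additive fuzzy submodel concrete (the memberships and rule consequents below are invented for illustration, not taken from the book), a zero-order rule base with triangular sets reduces to a membership-weighted average:

def tri(x, a, b, c):
    # triangular membership function with peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

sets = [(-1.5, -1.0, 0.0), (-1.0, 0.0, 1.0), (0.0, 1.0, 1.5)]  # low / mid / high
consequents = [-0.8, 0.1, 0.9]                                  # one output per rule

def fuzzy_model(x):
    mu = [tri(x, *s) for s in sets]
    total = sum(mu) or 1.0
    return sum(m * c for m, c in zip(mu, consequents)) / total  # weighted average

print([round(fuzzy_model(v), 3) for v in (-1.0, 0.0, 0.5, 1.0)])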

Relevance:

30.00%

Abstract:

Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and operate at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid the interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best treated with a deterministic electromagnetic approach and those better analysed with a stochastic approach incorporating quantum-mechanical effects, so two computational models of the propagation of THz radiation in an absorbing medium were compared: a thin-film analysis and a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample, and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom; the Monte Carlo model gave predictions closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties of the absorbing media, including their scattering characteristics, is necessary. The thin-film model is computationally simple to implement but is restricted by the sample geometries it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
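A minimal sketch of the Monte Carlo approach for a purely absorbing slab (illustrative values; the paper's model derives the absorption from Cole–Cole parameters and handles more realistic geometry): photons draw exponential free paths from the absorption coefficient and are counted as transmitted if the path exceeds the slab thickness, converging to Beer–Lambert attenuation.

import math, random

random.seed(0)
mu_a = 200.0        # absorption coefficient, 1/m (illustrative value only)
d = 0.002           # slab thickness, 2 mm
n_photons = 100_000

transmitted = sum(
    1 for _ in range(n_photons)
    if -math.log(random.random()) / mu_a > d   # sampled free path beyond the slab
)
print(transmitted / n_photons, math.exp(-mu_a * d))  # MC estimate vs Beer-Lambert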