926 results for Model-Data Integration and Data Assimilation



Relevance: 100.00%

Abstract:

Information management is a key aspect of successful construction projects. Inaccurate measurements and conflicting data can lead to costly mistakes, and vague quantities can ruin estimates and schedules. Building information modeling (BIM) augments a 3D model with a wide variety of information, which reduces many sources of error and can detect conflicts before they occur. Because new technology is often more complex, it can be difficult to integrate effectively with existing business practices. In this paper, we answer two questions: how can BIM add value to construction projects, and what lessons can be learned from other companies that use BIM or similar technology? Previous research treated the technology as if it were simply a tool, observing problems that occurred while integrating new technology into existing practices. Our research instead looks at the flow of information through a company and its network, seeing all the actors as part of an ecosystem. Building on this idea, we propose the metaphor of an information supply chain to illustrate how BIM can add value to a construction project. The paper concludes with two case studies. The first illustrates a failure in the flow of information that could have been prevented by using BIM. The second profiles a leading design firm that has used BIM products for many years and shows the real benefits of this technology.

Relevance: 100.00%

Abstract:

Data assimilation methods used for transient atmospheric state estimation in paleoclimatology, such as covariance-based approaches, analogue techniques and nudging, are briefly introduced. With applications differing widely, a plurality of approaches appears to be the logical way forward.
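
To make the nudging idea concrete, here is a minimal sketch of Newtonian relaxation toward observations; the toy decay model, the relaxation timescale, and the pseudo-observations are illustrative assumptions, not part of the cited work.

```python
import numpy as np

# Minimal sketch of nudging (Newtonian relaxation), one of the transient
# assimilation approaches mentioned above. The toy model, tau, and the
# observation series are invented for illustration.

def step(x, dt=1.0, alpha=0.05):
    """Toy autonomous model: slow decay toward zero."""
    return x - dt * alpha * x

def nudge(x, obs, dt=1.0, tau=10.0):
    """Relax the model state toward the observed value with timescale tau."""
    return x + dt * (obs - x) / tau

x = 5.0                      # initial model state
obs = np.full(100, 2.0)      # pseudo-observations (constant for illustration)
for y in obs:
    x = step(x)
    x = nudge(x, y)
print(f"state after nudged integration: {x:.3f}")  # pulled toward 2.0
```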

Relevance: 100.00%

Abstract:

Recently, considerable effort has been devoted to the efficient computation of kriging predictors when observations are assimilated sequentially. In particular, kriging update formulae enabling significant computational savings were derived. Taking advantage of the previous kriging mean and variance computations helps avoid a costly matrix inversion when adding one observation to the n already available ones. In addition to traditional update formulae accounting for a single new observation, Emery (2009) proposed formulae for the batch-sequential case, i.e. when q > 1 new observations are simultaneously assimilated. However, the kriging variance and covariance formulae given in Emery (2009) for the batch-sequential case are not correct. In this paper, we fix this issue and establish correct expressions for updated kriging variances and covariances when assimilating observations in parallel. An application in sequential conditional simulation finally shows that coupling update and residual substitution approaches may enable significant speed-ups.
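
The single-observation case underlying these formulae can be sketched as follows; the Gaussian covariance, nugget, and test locations are assumptions for illustration, and the batch (q > 1) corrections established in the paper are not reproduced here. The check confirms that the updated mean and variance match a full recomputation without re-inverting the covariance matrix.

```python
import numpy as np

# Sketch of the single-observation kriging update (simple kriging, zero
# prior mean, Gaussian covariance, small noise nugget). Kernel parameters
# and test points are illustrative assumptions.

TAU2 = 1e-6  # observation-noise variance (nugget)

def k(a, b, s2=1.0, ell=0.25):
    """Gaussian covariance between two sets of 1-D locations."""
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # the n existing locations
y = np.sin(4 * X)                               # observed values
xs = np.linspace(0.05, 0.95, 7)                 # prediction points
xn = np.array([0.55])                           # one new observation location
yn = np.sin(4 * xn)

K_inv = np.linalg.inv(k(X, X) + TAU2 * np.eye(len(X)))

def posterior(x):
    """Kriging mean and variance at x, plus covariance with xn, given n obs."""
    kx = k(x, X)
    mean = kx @ K_inv @ y
    var = np.diag(k(x, x)) - np.einsum('ij,jk,ik->i', kx, K_inv, kx)
    cov_xn = (k(x, xn) - kx @ K_inv @ k(X, xn))[:, 0]
    return mean, var, cov_xn

m_n, v_n, c_n = posterior(xs)
m_at_new, v_at_new, _ = posterior(xn)

# Kriging update formulae for a single new observation:
denom = v_at_new + TAU2
m_upd = m_n + c_n * (yn - m_at_new) / denom     # updated kriging mean
v_upd = v_n - c_n ** 2 / denom                  # updated kriging variance

# Cross-check against a full recomputation with all n+1 observations
X2, y2 = np.append(X, xn), np.append(y, yn)
K2_inv = np.linalg.inv(k(X2, X2) + TAU2 * np.eye(len(X2)))
m_full = k(xs, X2) @ K2_inv @ y2
v_full = np.diag(k(xs, xs)) - np.einsum('ij,jk,ik->i', k(xs, X2), K2_inv, k(xs, X2))
print(np.allclose(m_upd, m_full), np.allclose(v_upd, v_full))  # True True
```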

Relevance: 100.00%

Abstract:

Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can provide early warning of upcoming hypo-/hyperglycemic events and thus improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed that combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction, cARX, and a recurrent neural network, RNN). Data fusion techniques based on (i) Dempster-Shafer Evidential Theory (DST), (ii) Genetic Algorithms (GA), and (iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance, with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypoglycemic and hyperglycemic events, respectively. Compared to the cARX and RNN models, and to a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
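
As a simple stand-in for the paper's DST/GA/GP fusion schemes, the sketch below tunes a convex combination of two predictors to minimize RMSE on the data; the synthetic glucose traces, noise levels, and lag are invented for illustration.

```python
import numpy as np

# Illustrative predictor fusion: a convex combination of two glucose
# predictors with the weight chosen to minimise RMSE. The paper's actual
# fusion uses Dempster-Shafer theory, genetic algorithms and genetic
# programming; the data here are synthetic assumptions.

rng = np.random.default_rng(1)
truth = 120 + 30 * np.sin(np.linspace(0, 6, 200))            # mg/dL, synthetic
pred_carx = truth + rng.normal(0, 12, truth.size)            # noisy predictor 1
pred_rnn = np.roll(truth, 3) + rng.normal(0, 8, truth.size)  # lagged predictor 2

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# grid-search the fusion weight w in [0, 1]
weights = np.linspace(0, 1, 101)
errors = [rmse(w * pred_carx + (1 - w) * pred_rnn, truth) for w in weights]
w_best = weights[int(np.argmin(errors))]
fused = w_best * pred_carx + (1 - w_best) * pred_rnn

print(f"cARX-like RMSE: {rmse(pred_carx, truth):.1f}")
print(f"RNN-like  RMSE: {rmse(pred_rnn, truth):.1f}")
print(f"fused     RMSE: {rmse(fused, truth):.1f} (w = {w_best:.2f})")
```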

Relevance: 100.00%

Abstract:

This dataset characterizes the evolution of western African precipitation as indicated by marine sediment geochemical records, in comparison to transient simulations with the CCSM3 global climate model, throughout the Last Interglacial (130-115 ka). It contains (1) defined tie-points (age models), newly published stable isotopes of benthic foraminifera, and Al/Si log-ratios of eight marine sediment cores from the western African margin, and (2) annual and seasonal rainfall anomalies (relative to pre-industrial values) for six characteristic latitudinal bands in western Africa simulated by CCSM3 (two transient simulations: one non-accelerated and one accelerated experiment).
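
A minimal sketch of the anomaly convention used in (2), with made-up rainfall values: the simulated field minus a pre-industrial reference mean for each latitudinal band.

```python
import numpy as np

# Rainfall anomalies relative to a pre-industrial reference, per band.
# All numbers are invented for illustration.

preindustrial = np.array([2.1, 3.4, 1.0])   # mm/day reference, 3 bands
simulated = np.array([[2.5, 3.1, 1.2],      # time step 1
                      [2.0, 3.8, 0.9]])     # time step 2
anomalies = simulated - preindustrial        # broadcast over time steps
print(anomalies)
```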

Relevance: 100.00%

Abstract:

The influence of applying European default traffic values to the making of a noise map was evaluated in a representative environment, Palma de Mallorca. To assess these default traffic values, a first model was created and compared with measured noise levels. Subsequently, a second traffic model, improving on the input data used for the first, was created and validated against the observed deviations. Different methodologies for collecting higher-quality model input data were also examined, by analysing how the improved inputs reduced the uncertainty that road traffic noise emission introduces into the noise map.
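
The validation step can be sketched as a comparison of modelled and measured levels at common receiver points; all level values below are invented for illustration.

```python
import numpy as np

# Compare modelled noise levels against measurements at the same receivers
# and summarise the deviation, as in the validation described above.

measured = np.array([63.2, 58.7, 71.4, 66.0, 60.1])        # dB(A) at receivers
model_default = np.array([66.1, 61.9, 73.8, 69.2, 63.5])   # default traffic inputs
model_improved = np.array([63.9, 59.3, 72.0, 66.8, 60.7])  # improved traffic inputs

for name, model in [("default", model_default), ("improved", model_improved)]:
    dev = model - measured
    print(f"{name}: mean deviation {dev.mean():+.1f} dB, "
          f"RMSE {np.sqrt((dev ** 2).mean()):.1f} dB")
```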

Relevance: 100.00%

Abstract:

The focus of this study is on governance decisions in a concurrent-channels context under uncertainty. The study examines how a firm chooses to deploy its sales force in times of uncertainty, and the subsequent performance outcomes of those deployment choices. The theoretical framework is based on multiple theories of governance, including transaction cost analysis (TCA), agency theory, and institutional economics. Three uncertainty variables are investigated. The first two are demand and competitive uncertainty, which are industry-level forms of market uncertainty. The third, political uncertainty, is chosen because it is an important dimension of institutional environments, capturing non-economic circumstances such as regulations and political systemic issues. The study employs longitudinal secondary data from a Thai hotel chain, comprising monthly observations from January 2007 to December 2012. This hotel chain operates in four countries: Thailand, the Philippines, the United Arab Emirates (Dubai), and Egypt, all of which experienced substantial demand, competitive, and political uncertainty during the study period, making them ideal contexts for this study. Two econometric models, both deploying Newey-West estimations, are employed to test 13 hypotheses. The first model considers the relationship between uncertainty and governance. The second is a Newey-West variant using an Instrumental Variables (IV) estimator and a Two-Stage Least Squares (2SLS) model to test the direct effect of uncertainty on performance and the moderating effect of governance on the uncertainty-performance relationship. The observed relationship between uncertainty and governance follows a core prediction of TCA: vertical integration is the preferred governance choice when uncertainty rises. As for the subsequent performance outcomes, the results corroborate that uncertainty has a negative effect on performance. Importantly, the findings show that becoming more vertically integrated cannot moderate the effect of demand and competitive uncertainty, but can significantly moderate the effect of political uncertainty. These findings have significant theoretical and practical implications: they extend our knowledge of the impact of uncertainty and bring an institutional perspective to TCA. Further, they offer managers novel insight into the nature of different types of uncertainty, their impact on performance, and how channel decisions can mitigate these impacts.
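
A hedged sketch of the first model's estimation strategy, using statsmodels' HAC (Newey-West) covariance on a simulated monthly series; the variable names, lag length, and data are assumptions, and the IV/2SLS second stage is only noted in a comment.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Newey-West (HAC) estimation of an uncertainty-performance relation on a
# simulated monthly panel. The paper's actual data are proprietary hotel
# records; everything here is illustrative.

rng = np.random.default_rng(2)
n = 72  # six years of monthly observations
uncertainty = rng.normal(size=n)                                  # e.g. a political uncertainty index
integration = 0.4 * uncertainty + rng.normal(scale=0.5, size=n)   # governance choice
performance = (-0.6 * uncertainty + 0.3 * integration
               + rng.normal(scale=0.8, size=n))

X = sm.add_constant(pd.DataFrame({"uncertainty": uncertainty,
                                  "integration": integration}))
# HAC (Newey-West) standard errors, robust to autocorrelation up to 12 lags
fit = sm.OLS(performance, X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
print(fit.summary())
# For the moderation/endogeneity model, the paper pairs this with an
# instrumental-variables two-stage least squares (2SLS) estimator.
```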

Relevance: 100.00%

Abstract:

Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process; this knowledge base acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
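
The idea of a global view resolving schematic conflicts, as in contribution (iv), can be illustrated with plain SQL over SQLite; the component tables, column names, and unit mismatch below are invented, and the actual system uses Sem-ODM and Semantic SQL rather than plain SQL.

```python
import sqlite3

# Toy illustration of resolving a schematic conflict through a global view:
# two component schemas name and unit the same concept differently, and a
# unifying view hides the conflict from queries.

con = sqlite3.connect(":memory:")
con.executescript("""
    -- component schema A: price stored in dollars
    CREATE TABLE src_a (item TEXT, price_usd REAL);
    INSERT INTO src_a VALUES ('bolt', 0.10), ('nut', 0.05);

    -- component schema B: same concept, different name and cents unit
    CREATE TABLE src_b (product TEXT, cost_cents INTEGER);
    INSERT INTO src_b VALUES ('washer', 2);

    -- global view: unify names and units across the two sources
    CREATE VIEW global_catalog AS
        SELECT item AS name, price_usd AS price FROM src_a
        UNION ALL
        SELECT product AS name, cost_cents / 100.0 AS price FROM src_b;
""")
for row in con.execute("SELECT name, price FROM global_catalog ORDER BY name"):
    print(row)
```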


Relevance: 100.00%

Abstract:

A large percentage of pile caps support only one column, and the pile caps in turn are supported by only a few piles. These are typically short and deep members with overall span-depth ratios of less than 1.5. Codes of practice do not provide uniform treatment for the design of these types of pile caps. These members have traditionally been designed as beams spanning between piles, with the depth selected to avoid shear failures and the amount of longitudinal reinforcement selected to provide sufficient flexural capacity as calculated by engineering beam theory. More recently, the strut-and-tie method has been used for the design of pile caps (a disturbed or D-region), in which the load path is envisaged as a three-dimensional truss, with compressive forces supported by concrete struts between the column and the piles and tensile forces carried by reinforcing steel located between piles. Neither of these models has provided uniform factors of safety against failure or been able to predict whether failure will occur by flexure (a ductile mode) or shear (a brittle mode). In this paper, an analytical model based on the strut-and-tie approach is presented. The proposed model has been calibrated using an extensive experimental database of pile caps subjected to compression and evaluated analytically for more complex loading conditions. It has proven applicable across a broad range of test data and can predict the failure modes, cracking, yielding, and failure loads of four-pile caps with reasonable accuracy.
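
The statics behind the three-dimensional truss idealisation can be sketched for a concentric column load; the geometry and load values are illustrative, and a real design model, like the one calibrated in the paper, also checks strut strength, nodal zones, and cracking.

```python
import math

# Simplified statics of a strut-and-tie idealisation of a four-pile cap
# under a concentric column load: each pile carries P/4, the inclined strut
# angle follows from the lever arm and the pile offset, and the horizontal
# tie between pile heads equilibrates the strut thrust.

P = 2000.0   # kN, column load (assumed)
d = 0.80     # m, internal lever arm (assumed)
e = 0.90     # m, horizontal distance from column to pile centre (assumed)

pile_force = P / 4.0                        # kN, vertical reaction per pile
theta = math.atan2(d, e)                    # strut inclination to horizontal
strut_force = pile_force / math.sin(theta)  # kN, compression in each strut
tie_force = pile_force / math.tan(theta)    # kN, tension between pile heads

print(f"strut angle: {math.degrees(theta):.1f} deg")
print(f"strut force: {strut_force:.0f} kN, tie force: {tie_force:.0f} kN")
```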

Relevance: 100.00%

Abstract:

This article reports major results from collaborative research between France and Brazil on soil and water systems, carried out in the Upper Amazon Basin. It reveals the weathering processes acting in the partly inundated, low-elevation plateaus of the Basin, mostly covered by evergreen forest. Our findings are based on geochemical data and mineral spectroscopy that probe the crystal chemistry of Fe and Al in the mineral phases (mainly kaolinite and Al- and Fe-(hydr)oxides) of tropical soils (laterites). These techniques reveal crystal alterations in mineral populations of different ages and changes of metal speciation associated with mineral or organic phases. The results provide an integrated model of soil formation and change (from laterites to podzols) in distinct hydrological compartments of the Amazon landscapes and under altered water regimes.

Relevance: 100.00%

Abstract:

To develop a longitudinal model predicting hand-and-foot syndrome (HFS) dynamics in patients receiving capecitabine, data from two large phase III studies were used. Of 595 patients in the capecitabine arms, 400 were randomly selected to build the model and the other 195 were assigned to model validation. A score for the risk of developing HFS was modeled using a proportional odds model, a sigmoidal maximum-effect model driven by capecitabine accumulation (as estimated through a kinetic-pharmacodynamic model), and a Markov process. The lower the calculated creatinine clearance value at inclusion, the higher the risk of HFS. Model validation was performed by visual and statistical predictive checks. The predictive dynamic model of HFS in patients receiving capecitabine allows the prediction of toxicity risk based on cumulative capecitabine dose and previous HFS grade. This dose-toxicity model will be useful in developing Bayesian individual treatment adaptations and may be of use in the clinic.
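
The proportional-odds component can be sketched as cumulative logits mapping an exposure driver to grade probabilities; the intercepts, slope, and exposure values are invented, and the published model's kinetic-pharmacodynamic accumulation and Markov dependence on the previous grade are only noted in comments.

```python
import numpy as np

# Proportional odds sketch: cumulative logits give P(HFS grade >= k) as a
# function of a drug-exposure driver. The full published model also includes
# the capecitabine-accumulation (kinetic-pharmacodynamic) driver and a
# Markov dependence on the previous grade, omitted here.

def p_grade_ge(exposure, alpha, beta):
    """P(grade >= k) for each cutoff alpha_k, via the logistic function."""
    return 1.0 / (1.0 + np.exp(-(alpha + beta * exposure)))

alpha = np.array([-2.0, -4.0, -6.0])   # cutoffs for grades >=1, >=2, >=3 (assumed)
beta = 0.8                              # effect of the exposure driver (assumed)
for exposure in (1.0, 3.0, 5.0):
    cum = p_grade_ge(exposure, alpha, beta)
    # per-grade probabilities from successive differences of the cumulative
    probs = -np.diff(np.concatenate(([1.0], cum, [0.0])))
    print(f"exposure {exposure}: P(grade 0..3) = {np.round(probs, 3)}")
```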