172 results for Emerging Modelling Paradigms and Model Coupling
Abstract:
We survey the literature on spatial bio-economic and land-use modelling and assess its thematic development. Unobserved site-specific heterogeneity is a feature of almost all the surveyed works, and this feature, it seems, has stimulated significant methodological innovation. To improve how well the prototype incorporates heterogeneity, we consider modelling alternatives and extensions. We discuss solutions and conjecture others.
Abstract:
This work presents a method for building multiple-model structures. A clustering algorithm that uses data from the system is employed to define the architecture of the multiple-model, including the size of the region covered by each model and the number of models. A heating, ventilation, and air conditioning (HVAC) system is used as a testbed for the proposed method.
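The abstract does not give the algorithm's details, so the following is only a minimal sketch of the general idea: cluster operating data to fix the number of local models and the region each covers, then fit one simple model per region. The choice of k-means and linear local models is our assumption, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def build_multiple_model(X, y, n_models=4):
    """Partition the operating space and fit one local model per cluster."""
    clusterer = KMeans(n_clusters=n_models, n_init=10).fit(X)
    local_models = []
    for k in range(n_models):
        mask = clusterer.labels_ == k
        local_models.append(LinearRegression().fit(X[mask], y[mask]))
    return clusterer, local_models

def predict(clusterer, local_models, X):
    """Route each sample to the local model of its nearest cluster."""
    labels = clusterer.predict(X)
    return np.array([local_models[k].predict(x.reshape(1, -1))[0]
                     for k, x in zip(labels, X)])
```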
Abstract:
The new HadKPP atmosphere–ocean coupled model is described and then used to determine the effects of sub-daily air–sea coupling and fine near-surface ocean vertical resolution on the representation of the Northern Hemisphere summer intra-seasonal oscillation. HadKPP comprises the Hadley Centre atmospheric model coupled to the K Profile Parameterization ocean-boundary-layer model. Four 30-member ensembles were performed that varied in oceanic vertical resolution between 1 m and 10 m and in coupling frequency between 3 h and 24 h. The 10 m, 24 h ensemble exhibited roughly 60% of the observed 30–50 day variability in sea-surface temperatures and rainfall and very weak northward propagation. Enhancing either only the vertical resolution or only the coupling frequency produced modest improvements in variability and only a standing intra-seasonal oscillation. Only the 1 m, 3 h configuration generated organized, northward-propagating convection similar to observations. Sub-daily surface forcing produced stronger upper-ocean temperature anomalies in quadrature with anomalous convection, which likely affected lower-atmospheric stability ahead of the convection, causing propagation. Well-resolved air–sea coupling did not improve the eastward propagation of the boreal summer intra-seasonal oscillation in this model. Upper-ocean vertical mixing and diurnal variability in coupled models must be improved to accurately resolve and simulate tropical sub-seasonal variability. In HadKPP, the mere presence of air–sea coupling was not sufficient to generate an intra-seasonal oscillation resembling observations.
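A toy illustration (not HadKPP code) of why coupling frequency and mixed-layer depth matter: the atmosphere sees the ocean's SST only at coupling times, so a 24 h interval over a deep layer averages away the diurnal cycle that a 3 h interval over a shallow layer retains. All forcing values and parameters below are stand-in assumptions.

```python
import numpy as np

def run_coupled(hours=240, coupling_interval=3, mixed_layer_depth_m=1.0):
    rho, cp = 1025.0, 3990.0                         # seawater density, heat capacity
    heat_capacity = rho * cp * mixed_layer_depth_m   # J m^-2 K^-1
    sst, sst_seen_by_atmos, ssts = 300.0, 300.0, []
    for h in range(hours):
        # Idealized diurnal solar forcing minus a constant cooling term (W m^-2)
        flux = max(0.0, 800.0 * np.sin(2 * np.pi * (h % 24) / 24)) - 200.0
        sst += flux * 3600.0 / heat_capacity         # ocean integrates every hour
        if h % coupling_interval == 0:               # atmosphere sees SST only
            sst_seen_by_atmos = sst                  # at coupling times
        ssts.append(sst_seen_by_atmos)
    return np.array(ssts)

# Shallow layer, 3 h coupling retains a strong diurnal SST cycle;
# deep layer, 24 h coupling shows almost none.
print(run_coupled(coupling_interval=3, mixed_layer_depth_m=1.0).std())
print(run_coupled(coupling_interval=24, mixed_layer_depth_m=10.0).std())
```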
Abstract:
A very efficient learning algorithm for model subset selection is introduced, based on a new composite cost function that simultaneously optimizes the model approximation ability and the model robustness and adequacy. The model parameters are estimated via forward orthogonal least squares, but the subset selection cost function includes a D-optimality design criterion that maximizes the determinant of the design matrix of the subset to ensure the robustness, adequacy, and parsimony of the final model. The new D-optimality-based cost function is constructed within the forward orthogonalization process, so the approach retains the inherent computational efficiency of the conventional forward orthogonal least squares (OLS) algorithm. Illustrative examples are included to demonstrate the effectiveness of the new approach.
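A hedged sketch of how such a composite criterion can work. In the forward-orthogonalized basis, det(W^T W) is the product of the w_k^T w_k terms, so adding a log(w^T w) bonus to each candidate's error-reduction ratio steers selection toward well-conditioned subsets. The additive combination and the weight `beta` are our assumptions, not the paper's formula.

```python
import numpy as np

def forward_ols_doptimality(P, y, n_terms, beta=0.05):
    """P: (N, M) candidate regressor matrix; returns indices of selected terms."""
    N, M = P.shape
    W = P.astype(float).copy()      # columns are orthogonalized in place
    selected, yy = [], y @ y
    for _ in range(n_terms):
        best_score, best_j = -np.inf, None
        for j in range(M):
            if j in selected:
                continue
            w = W[:, j]
            wtw = w @ w
            if wtw < 1e-12:         # numerically dependent candidate
                continue
            err = (w @ y) ** 2 / (wtw * yy)    # error-reduction ratio
            score = err + beta * np.log(wtw)   # composite D-optimality cost
            if score > best_score:
                best_score, best_j = score, j
        if best_j is None:
            break
        selected.append(best_j)
        w_sel = W[:, best_j]
        # Gram-Schmidt: remove the chosen direction from remaining candidates
        for j in range(M):
            if j not in selected:
                W[:, j] -= (W[:, j] @ w_sel) / (w_sel @ w_sel) * w_sel
    return selected
```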
Abstract:
A very efficient learning algorithm for model subset selection is introduced, based on a new composite cost function that simultaneously optimizes the model approximation ability and model adequacy. The model parameters are estimated via forward orthogonal least squares, but the subset selection cost function includes an A-optimality design criterion that minimizes the variance of the parameter estimates, ensuring the adequacy and parsimony of the final model. An illustrative example is included to demonstrate the effectiveness of the new approach.
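Relative to the D-optimality sketch above, the A-optimality variant swaps the determinant term for the trace of the inverse information matrix: in the orthogonal basis trace((W^T W)^{-1}) = sum_k 1/(w_k^T w_k). The additive combination and weight `beta` are again illustrative assumptions.

```python
def a_optimality_score(err, wtw, beta=0.05):
    """Penalize candidates whose orthogonalized energy w^T w is small, since
    the variance of the associated parameter estimate scales as 1/(w^T w)."""
    return err - beta / wtw
```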
Abstract:
An error polynomial is defined whose coefficients indicate the difference at any instant between a system and a lower-order model approximating it. It is shown how Markov parameters and time-series proportionals of the model can be matched with those of the system by setting error polynomial coefficients to zero. Also discussed is the way in which the error between system and model can be regarded as a filtered form of an error input function specified through the choice of model parameters.
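A compact way to see the mechanism, using standard reduced-order-modelling algebra (the notation is ours, not the paper's):

```latex
% System G(s) = n(s)/d(s), reduced-order model R(s) = a(s)/b(s).
% Their difference has a single numerator, the error polynomial e(s):
\[
G(s) - R(s) \;=\; \frac{n(s)}{d(s)} - \frac{a(s)}{b(s)}
          \;=\; \frac{\overbrace{n(s)\,b(s) - a(s)\,d(s)}^{e(s)}}{d(s)\,b(s)} .
\]
% Writing e(s) = e_0 + e_1 s + \dots + e_m s^m: setting the low-order
% coefficients e_0, e_1, \dots to zero matches the expansion about s = 0
% (the time-series terms, often called time moments), while setting the
% highest-order coefficients to zero matches the Markov parameters, which
% come from the expansion about s = \infty.
```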
Abstract:
As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in situ marine data. The distributed model and in situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS), respectively. These services were developed independently and readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability that results from adherence to international standards. The key feature of the portal is the ability to display co-plotted time series of the in situ and model data and to quantify the misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in situ data feeds, without being concerned with the low-level details of differing file formats or the physical location of the data. Scientific and operational benefits of this work include model validation, quality control of observations, data assimilation, and decision support in near real time. In these areas it is essential to be able to bring different data streams together from often disparate locations.
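A minimal sketch of the standards-based access the portal relies on: an OGC WMS GetMap request is just an HTTP GET with well-known query parameters. The endpoint URL, layer name, and bounding box below are placeholders, not the project's actual services.

```python
import requests

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "sea_surface_temperature",  # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "45,-15,65,15",               # lat/lon axis order in WMS 1.3.0
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}
resp = requests.get("https://example.org/ecoop/wms", params=params, timeout=30)
resp.raise_for_status()
with open("sst_map.png", "wb") as f:
    f.write(resp.content)
```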
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To counteract the inaccuracy of high-level, top-down modeling efforts, the model focuses on device details and data routes. To compare ESD with a relevant physical distribution alternative, physical model boundaries and variables were defined. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included, to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and the networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used instead of physical distribution. The results also highlighted the importance of the server efficiency and utilization methods.
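A toy, back-of-envelope sketch of the kind of bottom-up accounting such a methodology performs for ESD. Every coefficient below is an illustrative placeholder, not a figure from the study.

```python
GRID_KG_CO2E_PER_KWH = 0.23        # assumed UK grid carbon intensity

def esd_footprint_kg(file_gb,
                     server_j_per_gb=3.6e4,  # data-centre energy per GB served
                     hop_j_per_gb=7.2e3,     # energy per network device traversed
                     n_hops=12,              # data hops from store to customer
                     client_w=60.0,          # client power while downloading
                     download_s=600.0,
                     embodied_kg=0.002):     # apportioned device manufacture CO2e
    transfer_j = file_gb * (server_j_per_gb + n_hops * hop_j_per_gb)
    client_j = client_w * download_s
    kwh = (transfer_j + client_j) / 3.6e6
    return kwh * GRID_KG_CO2E_PER_KWH + embodied_kg

print(f"{esd_footprint_kg(4.0):.3f} kg CO2e for a 4 GB download")
```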
Abstract:
DNA G-quadruplexes are among the targets being actively explored for anti-cancer therapy by inhibiting them with small molecules. This computational study was conducted to predict the binding strengths and orientations of a set of novel dimethyl-amino-ethyl-acridine (DACA) analogues that were designed and synthesized in our laboratory but did not diffract in synchrotron light. The crystal structure of the DNA G-quadruplex (TGGGGT)4 (PDB: 1O0K) was used as the target for their binding properties in our studies. We used both force field (FF) and QM/MM-derived atomic charge schemes simultaneously to compare predictions of drug binding modes and their energetics. This study evaluates the comparative performance of fixed-point-charge-based Glide XP docking and quantum polarized ligand docking schemes. These results will provide insights into the effects of including or ignoring drug-receptor interfacial polarization events in molecular docking simulations, which, in turn, will aid the rational selection of computational methods at different levels of theory in future drug design programs. Many molecular modelling tools and methods currently exist for modelling drug-receptor, protein-protein, or DNA-protein interactions at different levels of complexity. Yet improving the capacity of such tools to describe various physico-chemical properties more accurately is the next step in current research. In particular, the use of the most accurate quantum mechanics (QM) methods is severely restricted by their computational cost. Although massively parallel supercomputing environments have brought tremendous improvements to molecular mechanics (MM) calculations such as molecular dynamics, QM methods can still handle only tens to hundreds of atoms. One efficient strategy that combines the strengths of both MM and QM is the family of QM/MM hybrid methods. Lately, attempts have been directed towards deploying several different QM methods to improve force-field-based simulations, albeit with practical restrictions. One such method incorporates charge polarization events at the drug-receptor interface, which are not explicitly present in the MM force field.
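A schematic of the subtractive energy combination that QM/MM hybrid methods commonly use to marry QM accuracy with MM scalability; the energy values are placeholders, not output of any real quantum-chemistry or force-field package, and the paper's own scheme may differ.

```python
def qmmm_energy(e_mm_total: float, e_qm_region: float, e_mm_region: float) -> float:
    """Subtractive (ONIOM-style) scheme:
    E(QM/MM) = E_MM(whole system) + E_QM(active region) - E_MM(active region),
    i.e. the active region (e.g. the drug-receptor interface) is upgraded
    from the force-field description to the QM one."""
    return e_mm_total + e_qm_region - e_mm_region

# Illustrative numbers only (kcal/mol):
print(qmmm_energy(e_mm_total=-1520.0, e_qm_region=-310.0, e_mm_region=-295.0))
```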
Abstract:
In recent years, various efforts have been made in air traffic control (ATC) to maintain traffic safety and efficiency in the face of increasing air traffic demands. ATC is a complex process that depends to a large degree on human capabilities, and so understanding how controllers carry out their tasks is an important issue in the design and development of ATC systems. In particular, the human factor is considered to be a serious problem in ATC safety and has been identified as a causal factor in both major and minor incidents. There is, therefore, a need to analyse the mechanisms by which errors occur due to complex factors and to develop systems that can deal with these errors. From the cognitive process perspective, it is essential that system developers have an understanding of the more complex working processes that involve the cooperative work of multiple controllers. Distributed cognition is a methodological framework for analysing cognitive processes that span multiple actors mediated by technology. In this research, we attempt to analyse and model interactions that take place in en route ATC systems based on distributed cognition. We examine the functional problems in an ATC system from a human factors perspective, and conclude by identifying certain measures by which to address these problems. This research focuses on the analysis of air traffic controllers' tasks for en route ATC and modelling controllers' cognitive processes.
Abstract:
Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess, and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of increasingly complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce them. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid the digital preservation of climate models, as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
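Since CIM instances are ultimately XML constrained by schemas, a natural workflow is schema validation. The sketch below shows one hedged way to do this; the file names are hypothetical placeholders, as the real CIM schemas are distributed by the METAFOR project.

```python
from lxml import etree

schema = etree.XMLSchema(etree.parse("cim_activity.xsd"))  # placeholder schema
doc = etree.parse("my_model_run.xml")                      # placeholder instance

if schema.validate(doc):
    print("CIM instance is valid")
else:
    for err in schema.error_log:
        print(err.line, err.message)
```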
Abstract:
This paper provides a comparative study of the performance of cross-flow and counter-flow M-cycle heat exchangers for dew point cooling. It is recognised that evaporative cooling systems offer a low-energy alternative to conventional air conditioning units. The recently emerged dew point cooling, a renovated evaporative cooling configuration, is claimed to deliver much higher cooling output than conventional evaporative modes owing to its use of M-cycle heat exchangers. Cross-flow and counter-flow heat exchangers, the available structures for M-cycle dew point cooling, were theoretically and experimentally investigated to identify the difference in cooling effectiveness between the two under parallel structural/operational conditions, to optimise the geometrical sizes of the exchangers, and to suggest favourable operational conditions. Through development of a dedicated computer model and case-by-case experimental testing and validation, a parametric study of the cooling performance of the counter-flow and cross-flow heat exchangers was carried out. The results showed that the counter-flow exchanger offered greater (around 20% higher) cooling capacity, as well as greater (15%-23% higher) dew-point and wet-bulb effectiveness, when equal in physical size and under the same operating conditions. The cross-flow system, however, had a higher (around 10%) coefficient of performance (COP). Because the increased cooling effectiveness leads to a reduced air volume flow rate, smaller system size, and lower cost, and because size and cost are the inherent barriers to adopting dew point cooling as an alternative to conventional cooling systems, the counter-flow system is considered to offer practical advantages over the cross-flow system that would aid the uptake of this low-energy cooling alternative. Given the rising global demand for energy for cooling buildings, driven largely by the economic boom in emerging developing nations and by recognised global warming, these results are of significant importance for promoting deployment of low-energy dew point cooling systems, reducing energy use in cooling buildings, and cutting the associated carbon emissions.
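For reference, the performance metrics compared in the study are standard and easy to state; the sample temperatures below are made-up illustrative inputs, not measurements from the paper.

```python
def wet_bulb_effectiveness(t_in, t_out, t_wb_in):
    """Supply-air temperature drop relative to the inlet wet-bulb depression."""
    return (t_in - t_out) / (t_in - t_wb_in)

def dew_point_effectiveness(t_in, t_out, t_dp_in):
    """As above, but referenced to the inlet dew point; wet-bulb effectiveness
    above ~1.0 (sub-wet-bulb cooling) is the hallmark of the M-cycle."""
    return (t_in - t_out) / (t_in - t_dp_in)

def cop(cooling_capacity_w, electrical_power_w):
    """Coefficient of performance: cooling output per unit electrical input."""
    return cooling_capacity_w / electrical_power_w

t_in, t_out, t_wb, t_dp = 35.0, 19.0, 21.0, 15.0   # degrees C, illustrative
print(wet_bulb_effectiveness(t_in, t_out, t_wb))   # ~1.14: below wet bulb
print(dew_point_effectiveness(t_in, t_out, t_dp))  # ~0.80
```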
Abstract:
The Metafor project has developed a common information model (CIM) using the ISO19100 series formalism to describe numerical experiments carried out by the Earth system modelling community, the models they use, and the simulations that result. Here we describe the mechanism by which the CIM was developed, and its key properties. We introduce the conceptual and application versions and the controlled vocabularies developed in the context of supporting the fifth Coupled Model Intercomparison Project (CMIP5). We describe how the CIM has been used in experiments to describe model coupling properties and describe the near-term expected evolution of the CIM.