32 results for Conceptual site models
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was a Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour at a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but at a lower sampling frequency of 4 Hz, were also conducted for comparison. The turbulent momentum fluxes from COSMO and the wind tunnel showed similar values and distributions when scaled by friction velocity. The differences between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction in the atmospheric turbulence. The analysis focuses on a variety of instantaneous turbulent flow structures. One remarkable flow structure, termed 'flushing', is a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. It is observed intermittently, whereby tracer particles are flushed vertically out of the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passing of large-scale low-momentum regions above the canopy.
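The momentum-flux statistic and friction-velocity scaling described in this abstract can be sketched as follows. This is a minimal Python illustration on synthetic velocity series; the friction-velocity value and the signal parameters are assumptions for illustration, not values from the paper:

```python
import numpy as np

def momentum_flux(u, w):
    """Kinematic turbulent momentum flux <u'w'> from velocity time series.

    u, w : arrays of streamwise and vertical velocity (m/s) at one point,
    sampled at e.g. 30 Hz as in the COSMO PIV measurements.
    """
    u_p = u - u.mean()          # streamwise fluctuation u'
    w_p = w - w.mean()          # vertical fluctuation w'
    return np.mean(u_p * w_p)   # Reynolds stress component <u'w'>

# Synthetic one-hour record at 30 Hz (108,000 samples); real PIV data
# would supply u and w at each interrogation point instead.
rng = np.random.default_rng(0)
u = 3.0 + rng.normal(0.0, 0.5, 108_000)
w = rng.normal(0.0, 0.3, 108_000)

flux = momentum_flux(u, w)
u_star = 0.4                    # assumed friction velocity (m/s)
normalized = flux / u_star**2   # scaling that makes outdoor and
                                # wind-tunnel fluxes comparable
```

Scaling by the squared friction velocity is what allows the outdoor (COSMO) and wind-tunnel distributions to be compared directly, as the abstract describes.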
Abstract:
Using a literature review, we argue that new models of peatland development are needed. Many existing models do not account for potentially important ecohydrological feedbacks, and/or ignore spatial structure and heterogeneity. Existing models, including those that simulate a near total loss of the northern peatland carbon store under a warming climate, may produce misleading results because they rely upon oversimplified representations of ecological and hydrological processes. In this, the first of a pair of papers, we present the conceptual framework for a model of peatland development, DigiBog, which considers peatlands as complex adaptive systems. DigiBog accounts for the interactions between the processes which govern litter production and peat decay, peat soil hydraulic properties, and peatland water-table behaviour, in a novel and genuinely ecohydrological manner. DigiBog consists of a number of interacting submodels, each representing a different aspect of peatland ecohydrology. Here we present in detail the mathematical and computational basis, as well as the implementation and testing, of the hydrological submodel. Remaining submodels are described and analysed in the accompanying paper. Tests of the hydrological submodel against analytical solutions for simple aquifers were highly successful: the greatest deviation between DigiBog and the analytical solutions was 2.83%. We also applied the hydrological submodel to irregularly shaped aquifers with heterogeneous hydraulic properties—situations for which no analytical solutions exist—and found the model's outputs to be plausible.
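The kind of test described here, checking a numerical hydrological solver against an analytical solution for a simple aquifer, can be sketched in Python. The case below is steady Dupuit flow with uniform recharge between two fixed heads; the parameter values are illustrative assumptions, not DigiBog's:

```python
import numpy as np

# Steady Dupuit flow in an unconfined aquifer with uniform recharge R,
# conductivity K, and fixed heads h0, hL at x = 0 and x = L:
#   h(x)^2 = h0^2 + (hL^2 - h0^2) * x/L + (R/K) * x * (L - x)
K, R, L = 1.0, 1e-3, 100.0   # m/day, m/day, m (illustrative values)
h0, hL = 5.0, 4.0            # boundary heads (m)
n = 101
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

h_exact = np.sqrt(h0**2 + (hL**2 - h0**2) * x / L + (R / K) * x * (L - x))

# Numerical solution of d2(h^2)/dx2 = -2R/K by Jacobi relaxation on v = h^2,
# with the boundary values held fixed:
v = np.full(n, h0**2)
v[-1] = hL**2
for _ in range(20000):
    v[1:-1] = 0.5 * (v[:-2] + v[2:]) + (R / K) * dx**2
h_num = np.sqrt(v)

# Greatest percentage deviation, the statistic the abstract reports:
max_dev = 100.0 * np.max(np.abs(h_num - h_exact) / h_exact)
```

Because the analytical solution is quadratic in x, the finite-difference scheme has no discretization error here, so the remaining deviation measures only iteration convergence.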
Abstract:
The aim of this paper is to critically examine the application of development appraisal to viability assessment in the planning system. The evaluation covers development appraisal models in general, as well as their use in particular applications associated with estimating planning obligation capacity. The paper is organised into four themes:
· The context and conceptual basis for development viability appraisal
· A review of development viability appraisal methods
· A discussion of selected key inputs into a development viability appraisal
· A discussion of the applications of development viability appraisals in the planning system
It is assumed that readers are familiar with the basic models and information needs of development viability appraisal, rather than being at the cutting edge of practice and/or academe.
Abstract:
World-wide structural genomics initiatives are rapidly accumulating structures for which limited functional information is available. Additionally, state-of-the-art structural prediction programs are now capable of generating at least low-resolution structural models of target proteins. Accurate detection and classification of functional sites within both solved and modelled protein structures therefore represent an important challenge. We present a fully automatic site detection method, FuncSite, that uses neural network classifiers to predict the location and type of functionally important sites in protein structures. The method is designed primarily to require only backbone residue positions, without the need for specific side-chain atoms to be present. To demonstrate effective site detection in low-resolution structural models, FuncSite was used to screen model proteins generated using mGenTHREADER on a set of newly released structures. We found effective metal-site detection even for moderate-quality protein models, illustrating the robustness of the method.
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented in which three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), are evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. First, a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models are proposed: one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions. All models are evaluated using five metrics under a wide range of values of the correlation coefficient, the Weibull scale factor, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
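The two published reference MCP models named above, simple linear regression and the variance-ratio method, can be sketched in Python on synthetic concurrent records in the spirit of the paper's synthetic evaluation. The data-generation parameters below are assumptions for illustration:

```python
import numpy as np

# Synthetic concurrent records standing in for the reference site and
# the prospective ("site") measurement campaign.
rng = np.random.default_rng(1)
ref = rng.weibull(2.0, 5000) * 8.0               # reference wind speeds (m/s)
site = np.clip(0.9 * ref + rng.normal(0.0, 1.0, 5000), 0.0, None)

# Reference model 1 -- simple linear regression: site ≈ a * ref + b
a, b = np.polyfit(ref, site, 1)
def predict_lr(r):
    return a * r + b

# Reference model 2 -- variance ratio: a linear map chosen so the
# predictions match the mean and standard deviation of the site record.
s = site.std() / ref.std()
def predict_vr(r):
    return site.mean() + s * (r - ref.mean())

# Long-term step: apply the relational model to the historic reference
# record to estimate the long-term site resource.
long_term_ref = rng.weibull(2.0, 50000) * 8.0
estimate = predict_vr(long_term_ref).mean()
```

The variance-ratio method exists precisely because plain regression shrinks the predicted variance, which biases energy-yield statistics that depend on the spread of the speed distribution.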
Abstract:
This Atlas presents statistical analyses of the simulations submitted to the Aqua-Planet Experiment (APE) data archive. The simulations are from global Atmospheric General Circulation Models (AGCM) applied to a water-covered earth. The AGCMs include ones actively used or being developed for numerical weather prediction or climate research. Some are mature application models and others are more novel and thus less well tested in Earth-like applications. The experiment applies AGCMs with their complete parameterization package to an idealization of the planet Earth which has a greatly simplified lower boundary that consists of an ocean only. It has no land (and hence no orography) and no sea ice. The ocean is represented by Sea Surface Temperatures (SST) which are specified everywhere with simple, idealized distributions. Thus, in the hierarchy of tests available for AGCMs, APE falls between tests with simplified forcings, such as those proposed by Held and Suarez (1994) and Boer and Denis (1997), and the Earth-like simulations of the Atmospheric Modeling Intercomparison Project (AMIP; Gates et al., 1999). Blackburn and Hoskins (2013) summarize the APE and its aims. They discuss where the APE fits within a modeling hierarchy which has evolved to evaluate complete models and which provides a link between realistic simulation and conceptual models of atmospheric phenomena. The APE bridges a gap in the existing hierarchy. The goals of APE are to provide a benchmark of current model behaviors and to stimulate research to understand the cause of inter-model differences. APE is sponsored by the World Meteorological Organization (WMO) joint Commission on Atmospheric Science (CAS), World Climate Research Program (WCRP) Working Group on Numerical Experimentation (WGNE). Chapter 2 of this Atlas provides an overview of the specification of the eight APE experiments and of the data collected. Chapter 3 lists the participating models and includes brief descriptions of each.
Chapters 4 through 7 present a wide variety of statistics from the 14 participating models for the eight different experiments. Additional intercomparison figures created by Dr. Yukiko Yamada in the AGU group are available at http://www.gfd-dennou.org/library/ape/comparison/. This Atlas is intended to present and compare the statistics of the APE simulations but does not contain a discussion of interpretive analyses. Such analyses are left for journal papers such as those included in the Special Issue of the Journal of the Meteorological Society of Japan (2013, Vol. 91A) devoted to the APE. Two papers in that collection provide an overview of the simulations: one (Blackburn et al., 2013) concentrates on the CONTROL simulation and the other (Williamson et al., 2013) on the response to changes in the meridional SST profile. Additional papers provide more detailed analysis of the basic simulations, while others describe various sensitivities and applications. The APE database holds a wealth of data that is now publicly available from the APE web site: http://climate.ncas.ac.uk/ape/. We hope that this Atlas will stimulate future analyses and investigations to understand the large variation seen in the model behaviors.
Abstract:
Purpose – This paper aims to provide a brief résumé of previous research which has analysed the impact of e-commerce on retail real estate in the UK, and to examine the important marketing role of the internet for shopping centre managers and retail landlords.
Design/methodology/approach – Based on the results from a wider study carried out in 2003, the paper uses case studies from two different shopping centres in the UK, and documents the innovative uses of both web-based marketing and online retailing by organisations that historically have not directly been involved in the retailing process.
Findings – The paper highlights the importance of considering online sales within a multi-channel approach to retailing. The two types of emerging shopping centre model which are identified are characterised by their ultimate relationship with the physical shopping centre on whose web site they reside. These can be summarised as: the “centre-led” approach, and the “brand-led” or “marketing-led” approach.
Research limitations/implications – The research is based on a limited number of in-depth case studies and secondary data. Further research is needed to monitor the continuing impact of e-commerce on retail property and the marketing strategies of shopping centre managers and owners.
Practical implications – Internet-based sales provide an important adjunct to conventional retail sales and an important source of potential risk for landlords and tenants in the real estate investment market. Regardless of whether retailers use the internet as a sales channel, as a product-sourcing tool, or merely to provide information to the consumer, the internet has become a keystone within the greater retail marketing mix. The findings have ramifications for understanding the way in which landlords are structuring their retail property to defray potential risks.
Originality/value – The paper examines shopping centre online marketing models for the first time in detail, and will be of value to retail occupiers, owners and other stakeholders of shopping centres.
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a “random” model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP) is too high. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
Abstract:
An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling application using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) that the parameter values identified should be adopted as default values in WRF.
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
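The benchmarking idea of comparing a model's score against a mean-value baseline and a "random" model built by bootstrap resampling of the observations can be sketched in Python. The metric (a normalised mean error) and all data below are illustrative assumptions, not the paper's exact benchmark scores:

```python
import numpy as np

def nme(obs, pred):
    """Normalised mean error: mean |error| scaled by the mean absolute
    deviation of the observations (so the mean-value model scores 1)."""
    return np.mean(np.abs(pred - obs)) / np.mean(np.abs(obs - obs.mean()))

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 1.5, 500)              # observed NPP-like values
model = obs + rng.normal(0.0, 0.5, 500)     # a model with modest error

score_model = nme(obs, model)
score_mean = nme(obs, np.full_like(obs, obs.mean()))   # mean-value baseline
score_random = np.mean([nme(obs, rng.choice(obs, obs.size))
                        for _ in range(100)])          # bootstrap baseline
# Smaller is better: a useful model should score below both baselines.
```

Scoring against these two null models separates genuine skill from the trivial ability to reproduce the observed mean or the observed distribution.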
Abstract:
Most prominent models of bilingual representation assume a degree of interconnection or shared representation at the conceptual level. However, in the context of linguistic and cultural specificity of human concepts, and given recent findings that reveal a considerable amount of bidirectional conceptual transfer and conceptual change in bilinguals, a particular challenge that bilingual models face is to account for non-equivalence or partial equivalence of L1- and L2-specific concepts in the bilingual conceptual store. The aim of the current paper is to provide a state-of-the-art review of the available empirical evidence from the fields of psycholinguistics and cognitive, experimental, and cross-cultural psychology, and to discuss how these may inform and further develop traditional and more recent accounts of bilingual conceptual representation. Based on a synthesis of the available evidence against the theoretical postulates of existing models, I argue that the most coherent account of bilingual conceptual representation combines three fundamental assumptions. The first is the distributed, multi-modal nature of representation. The second concerns cross-linguistic and cross-cultural variation of concepts. The third makes assumptions about the development of concepts, and the emergent links between those concepts and their linguistic instantiations.
Abstract:
There is ongoing work on conceptual modelling of such business notions as Affordance and Capability. We have found that these notions can be constructively defined using elements and properties of executable behaviour models. In this paper, we clarify the definitions of Affordance and Capability using Coloured Petri Nets and Protocol models. The illustrating case is the process of drug injection. We show that different behaviour modelling techniques provide different precision for the definition of Affordance and Capability, and we clarify the conceptual models of these notions. We generalise that behaviour models can be used to improve the precision of conceptualisation.
Abstract:
The aim of this study was to evaluate and improve the accuracy of plant uptake models for neutral hydrophobic organic pollutants (1 < logKOW < 9, −8 < logKAW < 0) used in regulatory exposure assessment tools, using uncertainty and sensitivity analyses. The models considered were RAIDAR, EUSES, CSOIL, CLEA, and CalTOX. CSOIL demonstrated the best performance of the five exposure assessment tools for root uptake from polluted soil in comparison with observed data, but no model predicted shoot uptake well. Recalibration of the transpiration and volatilisation parameters improved the performance of CSOIL and CLEA. The simulated dominant pathway for shoot uptake differed according to the properties of the chemical under consideration: chemicals with a higher air–water partition coefficient were transported into shoots via the soil–air–plant pathway, while chemicals with a lower octanol–water partition coefficient and air–water partition coefficient were transported via the root. The soil organic carbon content was a particularly sensitive parameter in each model, and using a site-specific value improved model performance.
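The pathway dependence on partition coefficients described above can be sketched as a simple classifier. The threshold values below are hypothetical placeholders for illustration, since the abstract does not report explicit cut-offs:

```python
# Hypothetical classifier for the dominant shoot-uptake pathway of a
# neutral hydrophobic organic chemical, following the qualitative rule in
# the abstract: high K_AW -> soil-air-plant; low K_OW and low K_AW -> root.
# The thresholds are illustrative assumptions, not values from the paper.
def dominant_shoot_pathway(log_kow, log_kaw,
                           kaw_threshold=-4.0, kow_threshold=4.0):
    if not (1.0 < log_kow < 9.0 and -8.0 < log_kaw < 0.0):
        raise ValueError("outside the neutral-hydrophobic domain of the study")
    if log_kaw > kaw_threshold:
        return "soil-air-plant"
    if log_kow < kow_threshold:
        return "root-to-shoot (transpiration stream)"
    return "mixed / model-dependent"
```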
Abstract:
Site-specific meteorological forcing appropriate for applications such as urban outdoor thermal comfort simulations can be obtained using a newly coupled scheme that combines a simple slab convective boundary layer (CBL) model and urban land surface model (ULSM) (here two ULSMs are considered). The former simulates daytime CBL height, air temperature and humidity, and the latter estimates urban surface energy and water balance fluxes accounting for changes in land surface cover. The coupled models are tested at a suburban site and two rural sites, one irrigated and one unirrigated grass, in Sacramento, U.S.A. All the variables modelled compare well to measurements (e.g. coefficient of determination = 0.97 and root mean square error = 1.5 °C for air temperature). The current version is applicable to daytime conditions and needs initial state conditions for the CBL model in the appropriate range to obtain the required performance. The coupled model allows routine observations from distant sites (e.g. rural, airport) to be used to predict air temperature and relative humidity in an urban area of interest. This simple model, which can be rapidly applied, could provide urban data for applications such as air quality forecasting and building energy modelling, in addition to outdoor thermal comfort.
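The evaluation statistics quoted in this abstract (RMSE and the coefficient of determination) can be computed as in this Python sketch. The R² definition used here is the common 1 − SS_res/SS_tot form, which may differ from the paper's exact choice:

```python
import numpy as np

def rmse(obs, mod):
    """Root-mean-square error between observed and modelled series."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def r_squared(obs, mod):
    """Coefficient of determination, 1 - SS_res / SS_tot."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    ss_res = np.sum((obs - mod) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

With hourly air-temperature observations and coupled-model output as the two arguments, these two statistics reproduce the kind of performance summary reported above (e.g. RMSE = 1.5 °C).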
Abstract:
Longitudinal flow bursts observed by the European Incoherent Scatter (EISCAT) radar, in association with dayside auroral transients observed from Svalbard, have been interpreted as resulting from pulses of enhanced reconnection at the dayside magnetopause. However, an alternative model has recently been proposed for a steady rate of magnetopause reconnection, in which the bursts of longitudinal flow are due to increases in the field line curvature force, associated with the By component of the magnetosheath field. We here evaluate these two models, using observations on January 20, 1990, by EISCAT and a 630-nm all-sky camera at Ny Ålesund. For both models, we predict the behavior of both the dayside flows and the 630-nm emissions on newly opened field lines. It is shown that the signatures of steady reconnection and magnetosheath By changes could possibly resemble the observed 630-nm auroral events, but only for certain locations of the observing site, relative to the ionospheric projection of the reconnection X line: however, in such cases, the flow bursts would be seen between the 630-nm transients and not within them. On the other hand, the model of reconnection rate pulses predicts that the flows will be enhanced within each 630-nm transient auroral event. The observations on January 20, 1990, are shown to be consistent with the model of enhanced reconnection rate pulses over a background level and inconsistent with the effects of periodic enhancements of the magnitude of the magnetosheath By component. We estimate that the reconnection rate within the pulses would have to be at least an order of magnitude larger than the background level between the pulses.