899 results for Isotropic and Anisotropic models
Abstract:
Current mathematical models in building research have in most studies been limited to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that, as a basis, the chaos model is valid and can handle the increasing complexity of building systems that have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I), published in the previous issue, reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos-theory-driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to (1) building simulation scientists and designers, (2) initiating a dialogue between scientists and engineers, and (3) stimulating future research on a wide range of issues involved in designing and managing building environmental systems.
Abstract:
Visual observation of human actions provokes more motor activation than observation of robotic actions. We investigated the extent to which this visuomotor priming effect is mediated by bottom-up or top-down processing. The bottom-up hypothesis suggests that robotic movements are less effective in activating the ‘mirror system’ via pathways from visual areas via the superior temporal sulcus to parietal and premotor cortices. The top-down hypothesis postulates that beliefs about the animacy of a movement stimulus modulate mirror system activity via descending pathways from areas such as the temporal pole and prefrontal cortex. In an automatic imitation task, subjects performed a prespecified movement (e.g. hand opening) on presentation of a human or robotic hand making a compatible (opening) or incompatible (closing) movement. The speed of responding on compatible trials, compared with incompatible trials, indexed visuomotor priming. In the first experiment, robotic stimuli were constructed by adding a metal and wire ‘wrist’ to a human hand. Questionnaire data indicated that subjects believed these movements to be less animate than those of the human stimuli but the visuomotor priming effects of the human and robotic stimuli did not differ. In the second experiment, when the robotic stimuli were more angular and symmetrical than the human stimuli, human movements elicited more visuomotor priming than the robotic movements. However, the subjects’ beliefs about the animacy of the stimuli did not affect their performance. These results suggest that bottom-up processing is primarily responsible for the visuomotor priming advantage of human stimuli.
Abstract:
Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from the literature the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
Abstract:
Forecasts of precipitation and water vapor made by the Met Office global numerical weather prediction (NWP) model are evaluated using products from satellite observations by the Special Sensor Microwave Imager/Sounder (SSMIS) and Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) for June–September 2011, with a focus on tropical areas (30°S–30°N). Consistent with previous studies, the predicted diurnal cycle of precipitation peaks too early (by ~3 h) and the amplitude is too strong over both tropical ocean and land regions. Most of the wet and dry precipitation biases, particularly those over land, can be explained by the diurnal-cycle discrepancies. An overall wet bias over the equatorial Pacific and Indian Oceans and a dry bias over the western Pacific warm pool and India are linked with similar biases in the climate model, which shares common parameterizations with the NWP version. Whereas precipitation biases develop within hours in the NWP model, underestimates in water vapor (which is assimilated by the NWP model) evolve over the first few days of the forecast. The NWP simulations are able to capture observed daily-to-intraseasonal variability in water vapor and precipitation, including fluctuations associated with tropical cyclones.
Abstract:
Future changes in runoff can have important implications for water resources and flooding. In this study, runoff projections from ISI-MIP (Inter-Sectoral Impact Model Intercomparison Project) simulations forced with HadGEM2-ES bias-corrected climate data under Representative Concentration Pathway 8.5 have been analysed for differences between impact models. Projections of change from a baseline period (1981-2010) to the future (2070-2099) from 12 impact models which contributed to the hydrological and biomes sectors of ISI-MIP were studied. The biome models differed from the hydrological models by the inclusion of CO2 impacts, and most also included a dynamic vegetation distribution. The biome and hydrological models agreed on the sign of runoff change for most regions of the world; however, in West Africa the hydrological models projected drying and the biome models a moistening. The biome models tended to produce larger increases and smaller decreases in regionally averaged runoff than the hydrological models, although there was large inter-model spread. The timing of runoff change was similar, but there were differences in magnitude, particularly at peak runoff. The impact of vegetation distribution change was much smaller than the projected change over time, while elevated CO2 had an effect as large as the magnitude of change over time projected by some models in some regions. The effect of CO2 on runoff was not consistent across the models, with two models showing increases and two decreases. There was also more spread in projections from the runs with elevated CO2 than with constant CO2. The biome models which gave increased runoff from elevated CO2 were also those which differed most from the hydrological models. Spatially, the regions with the largest differences between model types also tended to be those projected to show the largest effect of elevated CO2, and seasonal differences were similar, so elevated CO2 can partly explain the differences between the hydrological and biome model runoff change projections. This shows that a range of impact models should be considered to give the full range of uncertainty in impact studies.
Abstract:
The research network "Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models" was organized with European funding (COST Action ES0905) for the period 2010–2014. Its extensive brainstorming suggests how the subgrid-scale parameterization problem in atmospheric modeling, especially for convection, can be examined and developed from the point of view of a robust theoretical basis. Our main caution concerns the current emphasis on massive observational data analyses and process studies. The closure and the entrainment-detrainment problems are identified as the two highest priorities for convection parameterization under the mass-flux formulation. Also emphasized is the need for a drastic change in the current European research culture, as concerns policies and funding, so as not to further deplete the visions of the European researchers focusing on these basic issues.
Abstract:
Observational analyses of running 5-year ocean heat content trends (Ht) and net downward top-of-atmosphere radiation (N) are significantly correlated (r ~ 0.6) from 1960 to 1999, but a spike in Ht in the early 2000s is likely spurious, since it is inconsistent with estimates of N from both satellite observations and climate model simulations. Variations in N between 1960 and 2000 were dominated by volcanic eruptions and are well simulated by the ensemble mean of coupled models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). We find an observation-based reduction in N of -0.31 ± 0.21 W m-2 between 1999 and 2005 that potentially contributed to the recent warming slowdown, but the relative roles of external forcing and internal variability remain unclear. While present-day anomalies of N in the CMIP5 ensemble mean and observations agree, this may be due to a cancellation of errors in outgoing longwave and absorbed solar radiation.
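As background to why Ht and N should track each other, the standard planetary energy-budget relation (assumed here as context; it is not written out in the abstract) equates the net downward top-of-atmosphere radiation with the rate of change of heat stored in the climate system, most of which resides in the ocean:

\[
N \;\approx\; \frac{1}{A_E}\,\frac{dH}{dt},
\]

where H is the total ocean heat content in joules, A_E ≈ 5.1 x 10^14 m^2 is the Earth's surface area, and the approximation reflects that roughly 90% of the energy imbalance is taken up by the ocean. This is why running trends in Ht can be compared directly with N in W m-2.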
Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
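A minimal sketch of the 'incremental time-stepping' idea, assuming hypothetical stand-in routines for the expensive full correlated-k calculation and the cheap one- or two-band optically thin calculation (the actual Met Office scheme is not reproduced here): a full calculation is made every few steps, and intermediate steps add the change in the cheap calculation since the last full call.

```python
# Illustrative sketch only; full_rad and thin_rad are hypothetical stand-ins
# for a full correlated-k radiation solver and a one/two-band optically thin
# calculation that responds to changes in the cloud field.

def incremental_heating_rates(states, full_rad, thin_rad, full_every=6):
    """Heating rate for each model state, refreshing the full calculation
    only every `full_every` steps and applying cheap increments in between."""
    rates = []
    for step, state in enumerate(states):
        if step % full_every == 0:
            hr_full = full_rad(state)        # expensive, infrequent
            hr_thin_ref = thin_rad(state)    # cheap reference at the same time
            rates.append(hr_full)
        else:
            # increment caused by cloud changes since the last full call
            rates.append(hr_full + (thin_rad(state) - hr_thin_ref))
    return rates

# Toy usage with arbitrary stand-in numbers:
states = [{"cloud_fraction": 0.1 * k} for k in range(12)]
full_rad = lambda s: 1.0 - 0.5 * s["cloud_fraction"]
thin_rad = lambda s: -0.5 * s["cloud_fraction"]
print(incremental_heating_rates(states, full_rad, thin_rad))
```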
Abstract:
Aerosol properties above clouds have been retrieved over the South East Atlantic Ocean during the 2006 fire season using satellite observations from POLDER (Polarization and Directionality of Earth Reflectances). From June to October, POLDER observed a mean Above-Cloud Aerosol Optical Thickness (ACAOT) of 0.28 and a mean Above-Cloud Single Scattering Albedo (ACSSA) of 0.87 at 550 nm. These results have been used to evaluate the simulation of aerosols above clouds in 5 AeroCom (Aerosol Comparisons between Observations and Models) models (GOCART, HadGEM3, ECHAM5-HAM2, OsloCTM2 and SPRINTARS). Most models do not reproduce the observed large aerosol load episodes. The comparison highlights the importance of the injection height and the vertical transport parameterizations in simulating the large ACAOT observed by POLDER. Furthermore, the POLDER ACSSA is best reproduced by models with a high imaginary part of the black carbon refractive index, in accordance with recent recommendations.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic part generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
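As an illustration of the Wiener structure described above (an orthonormal filter bank followed by a static polynomial mapping), the sketch below builds a discrete Laguerre basis, the single-pole special case mentioned in the abstract, and fits a second-order Volterra polynomial to input-output data by least squares. The pole value, basis size, and the synthetic test system are assumptions made only for illustration; the paper's gradient-based (Levenberg-Marquardt) pole optimization is not reproduced.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_filter_bank(u, pole, n_filters):
    """Outputs of a discrete Laguerre basis driven by input u.
    L_0(z) = sqrt(1 - a^2) / (1 - a z^-1); each further function appends
    the all-pass section (z^-1 - a) / (1 - a z^-1)."""
    a = pole
    x = lfilter([np.sqrt(1.0 - a**2)], [1.0, -a], u)
    outputs = [x]
    for _ in range(1, n_filters):
        x = lfilter([-a, 1.0], [1.0, -a], x)
        outputs.append(x)
    return np.column_stack(outputs)

def second_order_regressors(X):
    """Constant, linear and quadratic (second-order Volterra) terms of the filter outputs."""
    n, m = X.shape
    quad = [X[:, i] * X[:, j] for i in range(m) for j in range(i, m)]
    return np.column_stack([np.ones(n), X] + quad)

# Synthetic Wiener system used only to exercise the code (not the paper's example):
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.tanh(lfilter([0.2], [1.0, -0.8], u))   # linear dynamics + static nonlinearity

Phi = second_order_regressors(laguerre_filter_bank(u, pole=0.8, n_filters=4))
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # Volterra-polynomial coefficients
# In the paper, the pole itself would also be optimized (e.g. by Levenberg-Marquardt)
# using the exact gradients of the filter outputs with respect to the pole.
```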
Abstract:
This study investigates the numerical simulation of three-dimensional time-dependent viscoelastic free surface flows using the Upper-Convected Maxwell (UCM) constitutive equation and an algebraic explicit model. This investigation was carried out to develop a simplified approach that can be applied to the extrudate swell problem. The relevant physics of this flow phenomenon is discussed in the paper and an algebraic model to predict extrudate swell is presented. It is based on an explicit algebraic representation of the non-Newtonian extra-stress through a kinematic tensor formed with the scaled dyadic product of the velocity field. The elasticity of the fluid is governed by a single transport equation for a scalar quantity which has the dimension of strain rate. The mass and momentum conservation equations, together with the constitutive equation (UCM or algebraic model), were solved by a three-dimensional time-dependent finite difference method. The free surface of the fluid was modeled using a marker-and-cell approach. The algebraic model was validated by comparing the numerical predictions with analytic solutions for pipe flow. In comparison with the classical UCM model, one advantage of this approach is that the computational workload is substantially reduced: the UCM model employs six differential equations while the algebraic model uses only one. The results showed stable flows with very large extrudate growths, beyond those usually obtained with standard differential viscoelastic models.
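For reference, the UCM constitutive equation mentioned above has the standard form below, written with the velocity-gradient convention \((\nabla\mathbf{u})_{ij} = \partial u_i/\partial x_j\); its six independent components for the symmetric extra-stress tensor in 3D are the six differential equations that the algebraic model replaces with a single scalar transport equation (the algebraic model itself is not reproduced here):

\[
\boldsymbol{\tau} + \lambda\,\overset{\nabla}{\boldsymbol{\tau}} = 2\eta\,\mathbf{D},
\qquad
\overset{\nabla}{\boldsymbol{\tau}} = \frac{\partial\boldsymbol{\tau}}{\partial t}
+ \mathbf{u}\cdot\nabla\boldsymbol{\tau}
- (\nabla\mathbf{u})\,\boldsymbol{\tau}
- \boldsymbol{\tau}\,(\nabla\mathbf{u})^{\mathsf{T}},
\qquad
\mathbf{D} = \tfrac{1}{2}\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf{T}}\right),
\]

where \(\lambda\) is the relaxation time and \(\eta\) the polymer viscosity.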
Abstract:
Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other, from the study subjects' response error distributions. Positive probabilities are assigned to both potentially realizable responses and artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are only assigned to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
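A minimal concrete instance of the simple mixed model discussed above (a balanced one-way layout, assumed here purely for illustration) makes the shrinkage behaviour of the BLUP explicit:

\[
y_{ij} = \mu + b_i + e_{ij},\qquad b_i \sim N(0,\sigma_b^2),\quad e_{ij}\sim N(0,\sigma_e^2),\quad j=1,\dots,n,
\]

and the BLUP of subject \(i\)'s latent value \(\mu + b_i\) is the shrinkage estimator

\[
\widehat{\mu + b_i} \;=\; w\,\bar{y}_{i\cdot} + (1-w)\,\hat{\mu},
\qquad w = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_e^2/n},
\]

which pulls each subject mean toward the overall mean. The finite population mixed model (FPMM) BLUP differs by assigning positive probability only to potentially realizable responses, as described in the abstract.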