891 results for potential models
Abstract:
Mathematical models have been vitally important in the development of technologies in building engineering. A literature review identifies that linear models are the most widely used building simulation models. The advent of intelligent buildings has added new challenges in the application of the existing models, as an intelligent building requires learning and self-adjusting capabilities based on environmental and occupants' factors. It is therefore argued that linearity is an inappropriate basis for any model of either complex building systems or occupant behaviours, whether for control or any other purpose. Chaos and complexity theory reflects the nonlinear dynamic properties of intelligent systems as exercised by occupants and the environment, and has been used widely in modelling various engineering, natural and social systems. It is proposed that chaos and complexity theory be applied to study intelligent buildings. This paper gives a brief description of chaos and complexity theory, presents its current positioning and recent developments in building engineering research, and outlines future potential applications to intelligent building studies, thereby providing a bridge between chaos and complexity theory and intelligent building research.
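The nonlinearity argument can be made concrete with the logistic map, the textbook example of chaotic dynamics. A minimal Python sketch (not from the paper; the parameter value is an arbitrary illustrative choice) shows the sensitivity to initial conditions that no linear model can reproduce:

    import numpy as np

    # Logistic map x -> r*x*(1-x) in its chaotic regime (r = 3.9, an assumed
    # illustrative value). Two copies of the state differ by 1e-9 initially.
    r = 3.9
    x = np.array([0.400000000, 0.400000001])
    for _ in range(40):
        x = r * x * (1 - x)
    print(x)  # after 40 steps the two trajectories no longer agree in any digit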
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to their applications to conventional buildings and intelligent buildings. It concluded that mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited for modelling intelligent buildings. Despite this progress, the possible future development of intelligent buildings based on current trends implies some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by demonstrating an example of an intelligent building system together with the mathematical models that have been developed for such a system, this review addresses the influence of mathematical models as a potential aid in developing intelligent buildings and perhaps even more advanced buildings of the future.
Abstract:
Reports that heat processing of foods induces the formation of acrylamide heightened interest in the chemistry, biochemistry, and safety of this compound. Acrylamide-induced neurotoxicity, reproductive toxicity, genotoxicity, and carcinogenicity are potential human health risks based on animal studies. Because exposure of humans to acrylamide can come from both external sources and the diet, there exists a need to develop a better understanding of its formation and distribution in food and its role in human health. To contribute to this effort, experts from eight countries have presented data on the chemistry, analysis, metabolism, pharmacology, and toxicology of acrylamide. Specifically covered are the following aspects: exposure from the environment and the diet; biomarkers of exposure; risk assessment; epidemiology; mechanism of formation in food; biological alkylation of amino acids, peptides, proteins, and DNA by acrylamide and its epoxide metabolite glycidamide; neurotoxicity, reproductive toxicity, and carcinogenicity; protection against adverse effects; and possible approaches to reducing levels in food. Cross-fertilization of ideas among several disciplines in which an interest in acrylamide has developed, including food science, pharmacology, toxicology, and medicine, will provide a better understanding of the chemistry and biology of acrylamide in food, and can lead to the development of food processes to decrease the acrylamide content of the diet.
Abstract:
Distributed computing paradigms for sharing resources, such as Clouds, Grids, Peer-to-Peer systems, or voluntary computing, are becoming increasingly popular. While there are some success stories such as PlanetLab, OneLab, BOINC, BitTorrent, and SETI@home, widespread use of these technologies for business applications has not yet been achieved. In a business environment, mechanisms are needed to give potential users incentives to participate in such networks. These mechanisms may range from simple non-monetary access rights, through monetary payments, to specific policies for sharing. Although a few models for a framework have been discussed (in the general area of a "Grid Economy"), none of these models has yet been realised in practice. This book attempts to fill this gap by discussing the reasons for such limited take-up and exploring incentive mechanisms for resource sharing in distributed systems. The purpose of this book is to identify research challenges in successfully using and deploying resource sharing strategies in open-source and commercial distributed systems.
Abstract:
Physiological evidence using Infrared Video Microscopy during the uncaging of glutamate has proven the existence of excitable calcium ion channels in spine heads, highlighting the need for reliable models of spines. In this study we compare the three main methods of simulating excitable spines: Baer & Rinzel's Continuum (B&R) model, Coombes' Spike-Diffuse-Spike (SDS) model and paired cable and ion channel equations (Cable model). Tests are done to determine how well the models approximate each other in terms of speed and heights of travelling waves. Significant quantitative differences are found between the models: travelling waves in the SDS model in particular are found to travel at much lower speeds and sometimes much higher voltages than in the Cable or B&R models. Meanwhile qualitative differences are found between the B&R and SDS models over realistic parameter ranges. The cause of these differences is investigated and potential solutions proposed.
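As a rough illustration of how such comparisons are made, the following Python sketch integrates a FitzHugh-Nagumo excitable cable (a simple stand-in for an excitable dendrite; the kinetics and every parameter value here are illustrative assumptions, not those of the study) and measures the speed and height of the travelling wave, the two quantities on which the models are compared:

    import numpy as np

    # FitzHugh-Nagumo excitable cable, explicit finite differences.
    nx, dx, dt, steps = 400, 0.1, 0.001, 50000
    a, eps = 0.1, 0.005          # excitation threshold, recovery rate (assumed)
    u = np.zeros(nx)             # membrane potential along the cable
    v = np.zeros(nx)             # recovery variable
    u[:20] = 1.0                 # stimulate the left end to launch a wave

    peak, cross = 0.0, {}        # wave height and threshold-crossing times
    for n in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        lap[0] = lap[-1] = 0.0   # sealed ends
        u += dt * (lap + u * (u - a) * (1 - u) - v)
        v += dt * eps * (u - 2.0 * v)
        peak = max(peak, u.max())
        for probe in (80, 240):  # record first crossing at two probe points
            if probe not in cross and u[probe] > 0.5:
                cross[probe] = n * dt

    if len(cross) == 2:
        speed = (240 - 80) * dx / (cross[240] - cross[80])
        print(f"wave speed ~ {speed:.2f}, height ~ {peak:.2f} (model units)")

Applying the same speed and height measurement to B&R, SDS, and Cable simulations of the same spiny dendrite is the essence of the comparison described in the abstract.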
Abstract:
This paper presents an approach for automatic classification of pulsed Terahertz (THz), or T-ray, signals, highlighting their potential in biomedical, pharmaceutical and security applications. T-ray classification systems supply a wealth of information about test samples and make possible the discrimination of heterogeneous layers within an object. In this paper, a novel technique involving the use of Auto Regressive (AR) and Auto Regressive Moving Average (ARMA) models on the wavelet transforms of measured T-ray pulse data is presented. Two example applications are examined: the classification of normal human bone (NHB) osteoblasts against human osteosarcoma (HOS) cells, and the identification of six different powder samples. A variety of model types and orders are used to generate descriptive features for subsequent classification. Wavelet-based de-noising with soft threshold shrinkage is applied to the measured T-ray signals prior to modeling. For classification, a simple Mahalanobis distance classifier is used. After feature extraction, classification accuracy for cancerous and normal cell types is 93%, whereas for powders it is 98%.
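A minimal sketch of the pipeline described (wavelet soft-threshold de-noising, AR-coefficient features, Mahalanobis classification), assuming NumPy and PyWavelets, with synthetic stand-in signals in place of measured T-ray pulses; the wavelet, AR order, and threshold rule are illustrative assumptions:

    import numpy as np
    import pywt

    def denoise(sig, wavelet="db4", level=4):
        # Soft-threshold shrinkage of the detail coefficients.
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(sig)))          # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(sig)]

    def ar_features(sig, order=8):
        # Least-squares AR(order) fit; the coefficients are the features.
        X = np.column_stack([sig[order - k - 1 : len(sig) - k - 1] for k in range(order)])
        return np.linalg.lstsq(X, sig[order:], rcond=None)[0]

    def mahalanobis(x, mean, inv_cov):
        d = x - mean
        return float(np.sqrt(d @ inv_cov @ d))

    # Toy two-class data standing in for measured T-ray pulses.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 512)
    make = lambda f, d: denoise(np.sin(f * np.pi * t) * np.exp(-d * t)
                                + 0.1 * rng.standard_normal(512))
    feats = [np.array([ar_features(make(f, d)) for _ in range(30)])
             for f, d in ((9, 4), (14, 6))]
    stats = [(f.mean(0), np.linalg.inv(np.cov(f.T) + 1e-6 * np.eye(f.shape[1])))
             for f in feats]
    test = ar_features(make(14, 6))     # a fresh pulse from the second class
    print("assigned class:", int(np.argmin([mahalanobis(test, m, ic) for m, ic in stats])))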
Abstract:
Assimilation of physical variables into coupled physical/biogeochemical models poses considerable difficulties. One problem is that data assimilation can break relationships between physical and biological variables. As a consequence, biological tracers, especially nutrients, are incorrectly displaced in the vertical, resulting in unrealistic biogeochemical fields. To prevent this, we present the idea of applying an increment to the nutrient field within a data-assimilating model to ensure that nutrient-potential density relationships are maintained within a water column during assimilation. After correcting the nutrients, it is assumed that other biological variables rapidly adjust to the corrected nutrient fields. We applied this method to a 17-year run of the 2° NEMO ocean-ice model coupled to the PlankTOM5 ecosystem model. Results were compared with a control with no assimilation, and with a model with physical assimilation but no nutrient increment. In the nutrient-incrementing experiment, phosphate distributions were improved both at high latitudes and at the equator. At midlatitudes, assimilation generated unrealistic advective upwelling of nutrients within the boundary currents, which spread into the subtropical gyres, resulting in more biased nutrient fields. This result was largely unaffected by the nutrient increment and is probably due to boundary currents being poorly resolved in a 2° model. Changes to nutrient distributions fed through into other biological parameters, altering primary production, air-sea CO2 flux, and chlorophyll distributions. These secondary changes were most pronounced in the subtropical gyres and at the equator, which are more nutrient limited than high latitudes.
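The increment itself can be sketched very simply: for each water column, store the pre-assimilation nutrient-versus-potential-density relationship and re-map it onto the post-assimilation density profile. A minimal one-dimensional Python illustration (the variable names and toy column are assumptions; none of the NEMO/PlankTOM5 machinery is shown):

    import numpy as np

    def nutrient_increment(sigma_before, nutrient_before, sigma_after):
        # Increment that restores the column's nutrient-density relationship.
        order = np.argsort(sigma_before)      # np.interp needs ascending x
        remapped = np.interp(sigma_after, sigma_before[order], nutrient_before[order])
        return remapped - nutrient_before

    # Toy column: physical assimilation has displaced the isopycnals.
    sigma_b = np.array([25.0, 26.0, 26.5, 27.0, 27.5])   # potential density before
    no3_b = np.array([0.1, 2.0, 8.0, 15.0, 22.0])        # nutrient before
    sigma_a = np.array([25.0, 25.5, 26.0, 26.5, 27.0])   # density after assimilation
    no3_a = no3_b + nutrient_increment(sigma_b, no3_b, sigma_a)
    print(no3_a)  # nutrients now follow the displaced density surfaces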
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step, the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce well the observed seasonal variations in cloud occurrence. Overall, models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
Abstract:
An aquaplanet model is used to study the nature of the highly persistent low-frequency waves that have been observed in models forced by zonally symmetric boundary conditions. Using the Hayashi spectral analysis of the extratropical waves, the authors find that a quasi-stationary wave 5 belongs to a wave packet obeying a well-defined dispersion relation with eastward group velocity. The components of the dispersion relation with k ≥ 5 baroclinically convert eddy available potential energy into eddy kinetic energy, whereas those with k < 5 are baroclinically neutral. In agreement with Green's model of baroclinic instability, wave 5 is weakly unstable, and the inverse energy cascade, which had been previously proposed as a main forcing for this type of wave, only acts as a positive feedback on its predominantly baroclinic energetics. The quasi-stationary wave is reinforced by a phase lock to an analogous pattern in the tropical convection, which provides further amplification to the wave. It is also found that the Pedlosky bounds on the phase speed of unstable waves provide guidance in explaining the latitudinal structure of the energy conversion, which is shown to be more enhanced where the zonal westerly surface wind is weaker. The wave's energy is then trapped in the waveguide created by the upper-tropospheric jet stream. In agreement with Green's theory, as the equator-to-pole SST difference is reduced, the stationary marginally stable component shifts toward higher wavenumbers, while wave 5 becomes neutral and westward propagating. Some properties of the aquaplanet quasi-stationary waves are found to be in interesting agreement with a low-frequency wave observed by Salby during December–February in the Southern Hemisphere, so that this perspective on low-frequency variability, apart from its value in terms of basic geophysical fluid dynamics, might be of specific interest for studying the earth's atmosphere.
Abstract:
To test the effectiveness of stochastic single-chain models in describing the dynamics of entangled polymers, we systematically compare one such model, the slip-spring model, to a multichain model solved using stochastic molecular dynamics (MD) simulations (the Kremer-Grest model). The comparison involves investigating whether the single-chain model can adequately describe both a microscopic dynamical and a macroscopic rheological quantity for a range of chain lengths. Choosing a particular chain length in the slip-spring model, the parameter values that best reproduce the mean-square displacement of a group of monomers are determined by fitting to MD data. Using the same set of parameters, we then test whether the predictions of the mean-square displacements for other chain lengths agree with the MD calculations. We follow this with a comparison of the time-dependent stress relaxation moduli obtained from the two models for a range of chain lengths. After identifying a limitation of the original slip-spring model in describing the static structure of the polymer chain as seen in MD, we remedy this by introducing a pairwise repulsive potential between the monomers in the chains. Poor agreement of the mean-square monomer displacements at short times is rectified by the use of generalized Langevin equations for the dynamics, resulting in significantly improved agreement.
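The static-structure fix mentioned above can be sketched as a soft pairwise repulsion added to a single bead-spring chain; the Gaussian form, all parameter values, and the overdamped update below are illustrative assumptions, not the potential used in the paper:

    import numpy as np

    def repulsive_forces(pos, eps=1.0, sigma=1.0):
        # Soft Gaussian repulsion U = eps * exp(-r^2 / (2 sigma^2)) over all bead pairs.
        f = np.zeros_like(pos)
        for i in range(len(pos) - 1):
            d = pos[i + 1:] - pos[i]                     # vectors to later beads
            r2 = np.sum(d * d, axis=1)
            mag = (eps / sigma**2) * np.exp(-r2 / (2 * sigma**2))
            pair = d * mag[:, None]                      # force on the later beads
            f[i] -= pair.sum(axis=0)                     # Newton's third law
            f[i + 1:] += pair
        return f

    # One overdamped Langevin step for a 50-bead chain with harmonic springs.
    rng = np.random.default_rng(1)
    pos = np.cumsum(rng.standard_normal((50, 3)), axis=0)  # random-walk start
    k_spring, dt = 3.0, 1e-3
    bond = np.diff(pos, axis=0)
    f = repulsive_forces(pos)
    f[:-1] += k_spring * bond                            # spring pulls bead i forward
    f[1:] -= k_spring * bond                             # and bead i+1 backward
    pos += dt * f + np.sqrt(2 * dt) * rng.standard_normal(pos.shape)

In the slip-spring context this repulsion would act alongside the bonded and slip-spring forces; only the repulsive ingredient is shown here.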
Abstract:
This paper seeks to illustrate the point that physical inconsistencies between thermodynamics and dynamics usually introduce nonconservative production/destruction terms in the local total energy balance equation of numerical ocean general circulation models (OGCMs). Such terms potentially give rise to undesirable forces and/or diabatic terms in the momentum and thermodynamic equations, respectively, which could explain some of the observed errors in simulated ocean currents and water masses. In this paper, a theoretical framework is developed to provide a practical method for determining such nonconservative terms, illustrated in the context of a relatively simple form of the hydrostatic Boussinesq primitive equations used in early versions of OGCMs, for which at least four main potential sources of energy nonconservation are identified. They arise from: (1) the "hanging" kinetic energy dissipation term; (2) assuming potential or conservative temperature to be a conservative quantity; (3) the interaction of the Boussinesq approximation with the parameterizations of turbulent mixing of temperature and salinity; and (4) some adiabatic compressibility effects due to the Boussinesq approximation. In practice, OGCMs also possess spurious numerical energy sources and sinks, but these are not explicitly addressed here. Apart from (1), the identified nonconservative energy sources/sinks are not sign definite, allowing for possible widespread cancellation when integrated globally. Locally, however, these terms may be of the same order of magnitude as actual energy conversion terms thought to occur in the oceans. Although the actual impact of these nonconservative energy terms on the overall accuracy and physical realism of simulated oceans is difficult to ascertain, an important issue is whether they could affect transient simulations and the transition toward different circulation regimes associated with a significant reorganization of the different energy reservoirs. Some possible solutions for improvement are examined. It is found that term (2) can be reduced by at least one order of magnitude by using conservative temperature instead of potential temperature. Using the anelastic approximation, however, which was initially thought to be a possible way to greatly improve the accuracy of the energy budget, would only marginally reduce term (4), with no impact on terms (1), (2) and (3).
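The proposed remedy for term (2) is directly available in the TEOS-10 toolbox; a one-line illustration (assuming the GSW-Python package, with an arbitrary example water parcel):

    import gsw  # TEOS-10 Gibbs SeaWater toolbox

    SA = 35.2   # Absolute Salinity [g/kg] (illustrative value)
    pt = 10.0   # potential temperature referenced to 0 dbar [deg C]
    CT = gsw.CT_from_pt(SA, pt)   # Conservative Temperature, the suggested heat variable
    print(f"Conservative Temperature: {CT:.4f} deg C")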
Abstract:
The existence of sting jets as a potential source of damaging surface winds during the passage of extratropical cyclones has recently been recognized. However, there are still very few published studies on the subject. Furthermore, although it is known that other models are capable of reproducing sting jets, in the published literature only one numerical model [the Met Office Unified Model (MetUM)] has been used to numerically analyze these phenomena. This article aims to improve our understanding of the processes that contribute to the development of sting jets and to show that model differences affect the evolution of modeled sting jets. A sting jet event during the passage of a cyclone over the United Kingdom on 26 February 2002 has been simulated using two mesoscale models, namely the MetUM and the Consortium for Small-Scale Modeling (COSMO) model, to compare their performance. Given the known critical importance of vertical resolution in the simulation of sting jets, the vertical resolution of both models has been enhanced with respect to their operational versions. Both simulations have been verified against surface measurements of maximum gusts, satellite imagery, and Met Office operational synoptic analyses, as well as operational analyses from the ECMWF. It is shown that both models are capable of reproducing sting jets with similar, though not identical, features. Through the comparison of the results from these two models, the relevance of physical mechanisms, such as evaporative cooling and the release of conditional symmetric instability, in the generation and evolution of sting jets is also discussed.
Abstract:
Current mathematical models in building research have been limited in most studies to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that the chaos model is a valid basis and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I) reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to building simulation scientists, initiates a dialogue and builds bridges between scientists and engineers, and stimulates future research about a wide range of issues on building environmental systems.
Abstract:
Current mathematical models in building research have been limited in most studies to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that the chaos model is a valid basis and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I), published in the previous issue, reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to (1) building simulation scientists and designers, (2) initiating a dialogue between scientists and engineers, and (3) stimulating future research on a wide range of issues involved in designing and managing building environmental systems.
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which is the most successful statistical method depends on the region considered, GCM data used and prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
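The flavour of these statistical schemes can be sketched as a least-squares lagged regression, a crude relative of linear inverse modelling and constructed analogues; the data below are synthetic stand-ins, not HadCM3/HadGEM1 output, and all sizes and parameters are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    n_years, n_points, tau, train = 200, 40, 5, 150   # record length, grid size, lead, training years

    # Synthetic "control run": one low-frequency mode plus white noise.
    mode = np.cumsum(rng.standard_normal(n_years + tau))
    pattern = rng.standard_normal(n_points)
    sst = 0.5 * mode[:, None] * pattern + rng.standard_normal((n_years + tau, n_points))

    # Least-squares operator mapping this year's anomaly field to the field tau years on.
    X, Y = sst[:train], sst[tau : train + tau]
    A = np.linalg.lstsq(X, Y, rcond=None)[0]

    # Out-of-sample skill: anomaly correlation of lag-tau forecasts with truth.
    Xt, Yt = sst[train:n_years], sst[train + tau : n_years + tau]
    skill = np.corrcoef((Xt @ A).ravel(), Yt.ravel())[0, 1]
    print(f"anomaly correlation at {tau}-year lead: {skill:.2f}")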