94 results for Reduced physical models
in CentAUR: Central Archive University of Reading - UK
Abstract:
The Routh-stability method is employed to reduce the order of discrete-time system transfer functions. It is shown that the Routh approximant is well suited to reduce both the denominator and the numerator polynomials, although alternative methods, such as Padé-Markov approximation, are also used to fit the model numerator coefficients.
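The Padé-Markov idea mentioned above amounts to matching the leading Markov parameters (impulse-response samples) of the full and reduced transfer functions. A minimal sketch of computing Markov parameters by polynomial long division, using an illustrative first-order system rather than any example from the paper:

```python
def markov_parameters(b, a, k):
    """First k Markov parameters (impulse-response samples) of the
    discrete transfer function H(z) = B(z^-1) / A(z^-1), where b and a
    hold the numerator and denominator coefficients, via long division."""
    b = [c / a[0] for c in b]  # normalize so that a[0] == 1
    a = [c / a[0] for c in a]
    h = []
    for n in range(k):
        # h[n] = b[n] - sum_{i=1..n} a[i] * h[n-i]
        hn = b[n] if n < len(b) else 0.0
        hn -= sum(a[i] * h[n - i] for i in range(1, min(n, len(a) - 1) + 1))
        h.append(hn)
    return h

# Example: H(z) = 1 / (1 - 0.5 z^-1) has impulse response 1, 0.5, 0.25, ...
print(markov_parameters([1.0], [1.0, -0.5], 4))
```

In the scheme the abstract describes, the Routh method would supply a stable reduced denominator; the reduced numerator coefficients could then be chosen so that the first few Markov parameters of the reduced model agree with those of the full model.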
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, including conventional buildings and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstract levels of these models, and their links to the literature for intelligent buildings. The advantages and limitations of the applied mathematical models are identified and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges for the design and management of intelligent buildings and led to the use of models which offer more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models offer a promising way to accommodate these complications, as intelligent buildings now need integrated technologies that involve solving complex, multi-objective and integrated decision problems.
Abstract:
This paper proposes a new reconstruction method for diffuse optical tomography using reduced-order models of light transport in tissue. The models, which directly map optical tissue parameters to optical flux measurements at the detector locations, are derived based on data generated by numerical simulation of a reference model. The reconstruction algorithm based on the reduced-order models is a few orders of magnitude faster than the one based on a finite element approximation on a fine mesh incorporating a priori anatomical information acquired by magnetic resonance imaging. We demonstrate the accuracy and speed of the approach using a phantom experiment and through numerical simulation of brain activation in a rat's head. The applicability of the approach for real-time monitoring of brain hemodynamics is demonstrated through a hypercapnic experiment. We show that our results agree with the expected physiological changes and with results of a similar experimental study. However, by using our approach, a three-dimensional tomographic reconstruction can be performed in ∼3 s per time point instead of the 1 to 2 h it takes when using the conventional finite element modeling approach.
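The general pattern of building a reduced-order model from simulation snapshots can be sketched generically. This is not the authors' algorithm; it is a common snapshot/SVD surrogate in the same spirit, with a hypothetical toy "reference model" standing in for the expensive light-transport simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))  # fixed weights of the toy model

def reference_model(p):
    """Stand-in for an expensive simulation mapping tissue parameters
    to detector readings (hypothetical, for illustration only)."""
    return np.tanh(A @ p)

# Offline stage: sample the reference model to build a snapshot matrix.
params = rng.standard_normal((10, 50))
snapshots = np.column_stack([reference_model(params[:, i]) for i in range(50)])

# A truncated SVD gives a low-dimensional basis for the measurement space.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Ur = U[:, :5]

# Fit a direct (here linear) map from parameters to reduced coordinates.
coeffs, *_ = np.linalg.lstsq(params.T, (Ur.T @ snapshots).T, rcond=None)

def reduced_model(p):
    """Fast surrogate: parameters -> reduced coordinates -> measurements."""
    return Ur @ (coeffs.T @ p)
```

Evaluating `reduced_model` costs two small matrix-vector products, which is what makes surrogates of this kind attractive inside iterative reconstruction loops.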
Abstract:
When the sensory consequences of an action are systematically altered our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball’s bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
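The kinematic prediction at the heart of the task — how a ball's velocity changes when it bounces off a slanted surface — can be sketched as a mirror reflection about the surface normal. This is an idealized, spin-free 2D model for illustration; the study's actual simulation details are not given here:

```python
import math

def bounce(v, slant_deg):
    """Velocity after an ideal elastic bounce off a planar surface rotated
    slant_deg from horizontal: mirror reflection v' = v - 2 (v . n) n."""
    th = math.radians(slant_deg)
    n = (-math.sin(th), math.cos(th))          # unit surface normal
    d = v[0] * n[0] + v[1] * n[1]              # component of v along n
    return (v[0] - 2 * d * n[0], v[1] - 2 * d * n[1])

# A ball falling diagonally onto a flat surface rebounds upward:
print(bounce((1.0, -1.0), 0.0))  # -> (1.0, 1.0)
```

A different surface slant changes the normal and hence the rebound direction, which is exactly the regularity observers could exploit to recalibrate perceived slant.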
Abstract:
Considerable attention has been given to the impact of climate change on avian populations over the last decade. In this paper we examine two issues with respect to coastal bird populations in the UK: (1) is there any evidence that current populations are declining due to climate change, and (2) how might we predict the response of populations in the future? We review the cause of population decline in two species associated with saltmarsh habitats. The abundance of Common Redshank Tringa totanus breeding on saltmarsh declined by about 23% between the mid-1980s and mid-1990s, but the decline appears to have been caused by an increase in grazing pressure. The number of Twite Carduelis flavirostris wintering on the coast of East Anglia has declined dramatically over recent decades; there is evidence linking this decline with habitat loss but a causal role for climate change is unclear. These examples illustrate that climate change could be having population-level impacts now, but also show that it is dangerous to become too narrowly focused on single issues affecting coastal birds. Making predictions about how populations might respond to future climate change depends on an adequate understanding of important ecological processes at an appropriate spatial scale. We illustrate this with recent work conducted on the Icelandic population of Black-tailed Godwits Limosa limosa islandica that shows large-scale regulatory processes. Most predictive models to date have focused on local populations (single estuary or a group of neighbouring estuaries). We discuss the role such models might play in risk assessment, and the need for them to be linked to larger-scale ecological processes. We argue that future work needs to focus on spatial scale issues and on linking physical models of coastal environments with important ecological processes.
Abstract:
It is reported in the literature that distances from the observer are underestimated more in virtual environments (VEs) than in physical world conditions. On the other hand estimation of size in VEs is quite accurate and follows a size-constancy law when rich cues are present. This study investigates how estimation of distance in a CAVE™ environment is affected by poor and rich cue conditions, subject experience, and environmental learning when the position of the objects is estimated using an experimental paradigm that exploits size constancy. A group of 18 healthy participants was asked to move a virtual sphere controlled using the wand joystick to the position where they thought a previously-displayed virtual cube (stimulus) had appeared. Real-size physical models of the virtual objects were also presented to the participants as a reference of real physical distance during the trials. An accurate estimation of distance implied that the participants assessed the relative size of sphere and cube correctly. The cube appeared at depths between 0.6 m and 3 m, measured along the depth direction of the CAVE. The task was carried out in two environments: a poor cue one with limited background cues, and a rich cue one with textured background surfaces. It was found that distances were underestimated in both poor and rich cue conditions, with greater underestimation in the poor cue environment. The analysis also indicated that factors such as subject experience and environmental learning were not influential. However, least square fitting of Stevens’ power law indicated a high degree of accuracy during the estimation of object locations. This accuracy was higher than in other studies which were not based on a size-estimation paradigm. Thus, as an indirect result, this study appears to show that accuracy when estimating egocentric distances may be increased using an experimental method that provides information on the relative size of the objects used.
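Fitting Stevens' power law, as in the analysis above, reduces to linear regression in log-log coordinates. A minimal sketch with made-up data (the study's actual responses are not reproduced here):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of Stevens' power law y = a * x**b,
    obtained by linear regression of log(y) on log(x)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Illustrative depths (m) and hypothetical perceived depths following y = 0.8 x^0.9:
depths = [0.6, 1.0, 2.0, 3.0]
perceived = [0.8 * d ** 0.9 for d in depths]
print(fit_power_law(depths, perceived))  # recovers (a, b) near (0.8, 0.9)
```

An exponent b below 1 with a below 1, as in this toy example, corresponds to the compressive underestimation of distance the study reports.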
Abstract:
Acquiring a mechanistic understanding of the role of the biotic feedbacks on the links between atmospheric CO2 concentrations and temperature is essential for trustworthy climate predictions. Currently, computer based simulations are the only available tool to estimate the global impact of the biotic feedbacks on future atmospheric CO2 and temperatures. Here we propose an alternative and complementary approach by using materially closed and energetically open analogue/physical models of the carbon cycle. We argue that there is potential in using a materially closed approach to improve our understanding of the magnitude and sign of many biotic feedbacks, and that recent technological advances make this feasible. We also suggest how such systems could be designed and discuss the advantages and limitations of establishing physical models of the global carbon cycle.
Abstract:
The peroxisome proliferator-activated receptors (PPARs) are lipid-sensing transcription factors that have a role in embryonic development, but are primarily known for modulating energy metabolism, lipid storage, and transport, as well as inflammation and wound healing. Currently, there is no consensus as to the overall combined function of PPARs and why they evolved. We hypothesize that the PPARs had to evolve to integrate lipid storage and burning with the ability to reduce oxidative stress, as energy storage is essential for survival and resistance to injury/infection, but the latter increases oxidative stress and may reduce median survival (functional longevity). In a sense, PPARs may be an evolutionary solution to something we call the 'hypoxia-lipid' conundrum, where the ability to store and burn fat is essential for survival, but is a 'double-edged sword', as fats are potentially highly toxic. Ways in which PPARs may reduce oxidative stress involve modulation of mitochondrial uncoupling protein (UCP) expression (thus reducing reactive oxygen species, ROS), optimising forkhead box class O factor (FOXO) activity (by improving whole body insulin sensitivity) and suppressing NF-κB (at the transcriptional level). In light of this, we therefore postulate that inflammation-induced PPAR downregulation engenders many of the signs and symptoms of the metabolic syndrome, which shares many features with the acute phase response (APR) and is the opposite of the phenotype associated with calorie restriction and high FOXO activity. In genetically susceptible individuals (displaying the naturally mildly insulin resistant 'thrifty genotype'), suboptimal PPAR activity may follow an exaggerated but natural adipose tissue-related inflammatory signal induced by excessive calories and reduced physical activity, which normally couples energy storage with the ability to mount an immune response.
This is further worsened when pancreatic decompensation occurs, resulting in gluco-oxidative stress and lipotoxicity, increased inflammatory insulin resistance and oxidative stress. Reactivating PPARs may restore a metabolic balance and help to adapt the phenotype to a modern lifestyle.
Abstract:
We study the solutions of the Smoluchowski coagulation equation with a regularization term which removes clusters from the system when their mass exceeds a specified cutoff size, M. We focus primarily on collision kernels which would exhibit an instantaneous gelation transition in the absence of any regularization. Numerical simulations demonstrate that for such kernels with monodisperse initial data, the regularized gelation time decreases as M increases, consistent with the expectation that the gelation time is zero in the unregularized system. This decrease appears to be a logarithmically slow function of M, indicating that instantaneously gelling kernels may still be justifiable as physical models despite the fact that they are highly singular in the absence of a cutoff. We also study the case when a source of monomers is introduced in the regularized system. In this case a stationary state is reached. We present a complete analytic description of this regularized stationary state for the model kernel, K(m1, m2) = max{m1, m2}^ν, which gels instantaneously when M→∞ if ν>1. The stationary cluster size distribution decays as a stretched exponential for small cluster sizes and crosses over to a power law decay with exponent ν for large cluster sizes. The total particle density in the stationary state slowly vanishes as [(ν−1) log M]^(−1/2) when M→∞. The approach to the stationary state is nontrivial: Oscillations about the stationary state emerge from the interplay between the monomer injection and the cutoff, M, which decay very slowly when M is large. A quantitative analysis of these oscillations is provided for the addition model which describes the situation in which clusters can only grow by absorbing monomers.
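A minimal forward-Euler sketch of the regularized equation described above, with the kernel K(m1, m2) = max{m1, m2}^ν and removal of any cluster whose mass would exceed the cutoff M. This is an illustrative discretization, not the authors' numerical scheme:

```python
def smoluchowski_step(c, nu, dt, source=0.0):
    """One forward-Euler step of the regularized Smoluchowski equation.
    c[k-1] is the density of clusters of mass k; coagulation products
    with mass above M = len(c) are removed (the regularization).
    Kernel: K(i, j) = max(i, j)**nu; optional monomer source."""
    M = len(c)
    dc = [0.0] * M
    dc[0] += source  # monomer injection
    for i in range(1, M + 1):
        for j in range(1, M + 1):
            flux = (max(i, j) ** nu) * c[i - 1] * c[j - 1]
            dc[i - 1] -= flux            # cluster i is consumed
            k = i + j
            if k <= M:
                dc[k - 1] += 0.5 * flux  # product stays in the system
            # products with k > M leave the system entirely
    return [ci + dt * dci for ci, dci in zip(c, dc)]

# Monodisperse initial data, nu = 2, small cutoff M = 4:
c = [1.0, 0.0, 0.0, 0.0]
for _ in range(100):
    c = smoluchowski_step(c, nu=2.0, dt=0.01)
```

Iterating with a monomer source instead of monodisperse initial data would, per the abstract, drive the system toward a stationary state; tracking the total mass shows the slow leakage through the cutoff.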
Abstract:
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
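The idea of representing unresolved degrees of freedom by a stochastic term can be illustrated with a toy reduced-order model: a single resolved variable relaxing toward zero and forced by red (AR(1)) noise standing in for the subgrid tendency. All parameter values here are illustrative, not from any model cited in the review:

```python
import math
import random

def simulate(n_steps, dt=0.01, phi=0.9, sigma=0.3, seed=0):
    """Toy reduced model dx/dt = -x + u, where the unresolved tendency u
    is represented by a red-noise (AR(1)) stochastic parameterization
    with lag-one correlation phi and stationary standard deviation sigma."""
    rng = random.Random(seed)
    x, u = 0.0, 0.0
    xs = []
    for _ in range(n_steps):
        # AR(1) update preserving the stationary variance sigma**2:
        u = phi * u + sigma * math.sqrt(1.0 - phi * phi) * rng.gauss(0.0, 1.0)
        x += dt * (-x + u)  # resolved dynamics plus stochastic forcing
        xs.append(x)
    return xs

trajectory = simulate(1000)
```

Replacing the AR(1) term with white noise, or adding a non-Markovian memory term, changes the variability of the resolved variable — which is precisely the kind of sensitivity the statistical-mechanics arguments in the review address.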
Abstract:
The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), Proper Linear (PL) score, and IJ Good’s logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and is not insensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counterintuitive evaluations by CRPS. Benchmark forecasts from empirical models like Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than comparing forecast systems based on similar physical simulation models only with each other. It is shown that a forecast system based on HadCM3 outperforms Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based upon HadGEM2 and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average “distance” between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
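The two headline scores can be sketched directly. The CRPS below uses the standard ensemble estimator, and the Ignorance score is shown for a Gaussian predictive density, reported in bits; these are the generic textbook definitions, not the paper's exact implementations:

```python
import math

def crps_ensemble(members, obs):
    """Continuous Ranked Probability Score for an ensemble forecast,
    via the standard estimator: E|X - y| - 0.5 * E|X - X'|."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(x - y) for x in members for y in members) / (2 * m * m)
    return term1 - term2

def ignorance_gaussian(mu, sigma, obs):
    """IJ Good's logarithmic (Ignorance) score, in bits, for a Gaussian
    predictive density evaluated at the verifying observation."""
    pdf = math.exp(-0.5 * ((obs - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return -math.log2(pdf)

print(crps_ensemble([0.0, 2.0], 1.0))      # -> 0.5
print(ignorance_gaussian(0.0, 1.0, 0.0))   # about 1.33 bits
```

The locality contrast the abstract draws is visible here: Ignorance depends only on the density at the observed value, while CRPS integrates over the whole forecast distribution.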
Abstract:
The uptake and storage of anthropogenic carbon in the North Atlantic is investigated using different configurations of ocean general circulation/carbon cycle models. We investigate how different representations of the ocean physics in the models, which represent the range of models currently in use, affect the evolution of CO2 uptake in the North Atlantic. The buffer effect of the ocean carbon system would be expected to reduce ocean CO2 uptake as the ocean absorbs increasing amounts of CO2. We find that the strength of the buffer effect is very dependent on the model ocean state, as it affects both the magnitude and timing of the changes in uptake. The timescale over which uptake of CO2 in the North Atlantic drops below preindustrial levels is particularly sensitive to the ocean state which sets the degree of buffering; it is less sensitive to the choice of atmospheric CO2 forcing scenario. Neglecting physical climate change effects, North Atlantic CO2 uptake drops below preindustrial levels between 50 and 300 years after stabilisation of atmospheric CO2 in different model configurations. Storage of anthropogenic carbon in the North Atlantic varies much less among the different model configurations, as differences in ocean transport of dissolved inorganic carbon and uptake of CO2 compensate each other. This supports the idea that measured inventories of anthropogenic carbon in the real ocean cannot be used to constrain the surface uptake. Including physical climate change effects reduces anthropogenic CO2 uptake and storage in the North Atlantic further, due to the combined effects of surface warming, increased freshwater input, and a slowdown of the meridional overturning circulation. The timescale over which North Atlantic CO2 uptake drops below preindustrial levels is reduced by about one-third, leading to an estimate of this timescale for the real world of about 50 years after the stabilisation of atmospheric CO2.
In the climate change experiment, a shallowing of the mixed layer depths in the North Atlantic results in a significant reduction in primary production, reducing the potential role for biology in drawing down anthropogenic CO2.
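The buffer effect invoked in the abstract above is commonly quantified by the Revelle factor R (roughly 10 for the modern surface ocean): a fractional change in dissolved inorganic carbon (DIC) drives an R-times-larger fractional change in seawater pCO2. A rough illustration assuming a constant R (in reality R itself rises as CO2 accumulates, which is part of the effect the paper studies):

```python
def buffered_dic(pco2_initial, pco2_final, dic_initial, revelle=10.0):
    """DIC implied by a pCO2 change under a constant Revelle factor R.
    Integrating the definition d(pCO2)/pCO2 = R * d(DIC)/DIC gives
    DIC_final = DIC_initial * (pCO2_final / pCO2_initial) ** (1 / R)."""
    return dic_initial * (pco2_final / pco2_initial) ** (1.0 / revelle)

# Illustrative numbers: doubling pCO2 from a preindustrial-like 280 to
# 560 ppm, starting from a typical surface DIC of ~2000 umol/kg:
print(buffered_dic(280.0, 560.0, 2000.0))
```

Doubling pCO2 raises DIC by only about 7% in this constant-R sketch, which conveys why the ocean's capacity to absorb additional CO2 shrinks as atmospheric concentrations rise.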