963 results for Internal working models
Abstract:
Sea ice contains flaws including frictional contacts. We aim to describe quantitatively the mechanics of those contacts, providing local physics for geophysical models. With a focus on the internal friction of ice, we review standard micro-mechanical models of friction. The solid's deformation under normal load may be ductile or elastic. The shear failure of the contact may be by ductile flow, brittle fracture, or melting and hydrodynamic lubrication. Combinations of these give a total of six rheological models. When the material under study is ice, several of the rheological parameters in the standard models are not constant, but depend on the temperature of the bulk, on the normal stress under which samples are pressed together, or on the sliding velocity and acceleration. This has the effect of making the shear stress required for sliding dependent on sliding velocity, acceleration, and temperature. In some cases, it also perturbs the exponent in the normal-stress dependence of that shear stress away from the value that applies to most materials. We unify the models by a principle of maximum displacement for normal deformation, and of minimum stress for shear failure, reducing the controversy over the mechanism of internal friction in ice to the choice of values of four parameters in a single model. The four parameters represent, for a typical asperity contact, the sliding distance required to expel melt-water, the sliding distance required to break contact, the normal strain in the asperity, and the thickness of any ductile shear zone.
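For orientation, a minimal worked relation of the Bowden and Tabor type (the ductile, plastic-contact case among the rheologies reviewed above) can be sketched as follows; the symbols and the constant-property assumption are illustrative, not the paper's unified four-parameter model:

\[
A_r \approx \frac{W}{H}, \qquad F = s\,A_r \quad\Longrightarrow\quad \mu = \frac{F}{W} \approx \frac{s}{H},
\]

where \(W\) is the normal load, \(H\) the indentation hardness, \(A_r\) the real contact area, and \(s\) the shear strength of the asperity junctions. For ice, \(s\) and \(H\) both vary with temperature and sliding velocity, which is one route by which the friction coefficient acquires the temperature and velocity dependence described above.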
Abstract:
This paper presents single-column model (SCM) simulations of a tropical squall-line case observed during the Coupled Ocean-Atmosphere Response Experiment of the Tropical Ocean/Global Atmosphere Programme. This case-study was part of an international model intercomparison project organized by Working Group 4 ‘Precipitating Convective Cloud Systems’ of the GEWEX (Global Energy and Water-cycle Experiment) Cloud System Study. Eight SCM groups using different deep-convection parametrizations participated in this project. The SCMs were forced by temperature and moisture tendencies that had been computed from a reference cloud-resolving model (CRM) simulation using open boundary conditions. The comparison of the SCM results with the reference CRM simulation provided insight into the ability of current convection and cloud schemes to represent organized convection. The CRM results enabled a detailed evaluation of the SCMs in terms of the thermodynamic structure and the convective mass flux of the system, the latter being closely related to the surface convective precipitation. It is shown that the SCMs could reproduce reasonably well the time evolution of the surface convective and stratiform precipitation, the convective mass flux, and the thermodynamic structure of the squall-line system. The thermodynamic structure simulated by the SCMs depended on how the models partitioned the precipitation between convective and stratiform. However, structural differences persisted in the thermodynamic profiles simulated by the SCMs and the CRM. These differences could be attributed to the fact that the total mass flux used to compute the SCM forcing differed from the convective mass flux. The SCMs could not adequately represent the organized mesoscale circulations and the microphysical/radiative forcing associated with the stratiform region. This issue is generally known as the ‘scale-interaction’ problem, which can only be properly addressed in fully three-dimensional simulations. Sensitivity simulations run by several groups showed that the time evolution of the surface convective precipitation was considerably smoothed when the convective closure was based on convective available potential energy instead of moisture convergence. Finally, additional SCM simulations without a convection parametrization indicated that the impact of a convection parametrization in forced SCM runs was more visible in the moisture profiles than in the temperature profiles, because convective transport is particularly important in the moisture budget.
Abstract:
The relationship between working memory (WM) and attention is a highly interdependent one, with evidence that attention determines the state in which items in WM are retained. Through focusing of attention, an item might be held in a more prioritized state, commonly termed the focus of attention (FOA). The remaining items, although still retrievable, are considered to be in a different representational state. One means to bring an item into the FOA is to use retrospective cues (‘retro-cues’) which direct attention to one of the objects retained in WM. Alternatively, an item can enter a privileged state once attention is directed towards it through bottom-up influences (e.g. the recency effect) or by performing an action on one of the retained items (‘incidental’ cueing). In all these cases, the item in the FOA is recalled more accurately than the other items in WM. Far less is known about the nature of the other items in WM and whether they can be flexibly manipulated in and out of the FOA. We present data from three types of experiments, as well as from transcranial magnetic stimulation to early visual cortex, used to manipulate the item inside the FOA. Taken together, our results suggest that the context in which items are retained in WM matters. When an item remains behaviourally relevant, despite not being inside the FOA, re-focusing attention upon it can increase its recall precision. This suggests that a non-FOA item can be held in a state from which it can later be retrieved. However, if an item is rendered behaviourally unimportant because it is very unlikely to be probed, it can be brought neither back into the FOA nor recalled with high precision. Under such conditions, some information appears to be irretrievably lost from WM. These findings, obtained with several different methods, demonstrate the considerable flexibility with which items in WM can be represented depending upon context. They have important consequences for emerging state-dependent models of WM.
Abstract:
While a quantitative climate theory of tropical cyclone formation remains elusive, considerable progress has been made recently in our ability to simulate tropical cyclone climatologies and understand the relationship between climate and tropical cyclone formation. Climate models are now able to simulate a realistic rate of global tropical cyclone formation, although simulation of the Atlantic tropical cyclone climatology remains challenging unless horizontal resolutions finer than 50 km are employed. This article summarizes published research from the idealized experiments of the Hurricane Working Group of U.S. CLIVAR (CLImate VARiability and predictability of the ocean-atmosphere system). This work, combined with results from other model simulations, has strengthened relationships between tropical cyclone formation rates and climate variables such as mid-tropospheric vertical velocity, with decreased climatological vertical velocities leading to decreased tropical cyclone formation. Systematic differences are shown between experiments in which only sea surface temperature is increased versus experiments where only atmospheric carbon dioxide is increased, with the carbon dioxide experiments more likely to demonstrate the decrease in tropical cyclone numbers previously shown to be a common response of climate models in a warmer climate. Experiments where the two effects are combined also show decreases in numbers, but these tend to be less for models that demonstrate a strong tropical cyclone response to increased sea surface temperatures. Further experiments are proposed that may improve our understanding of the relationship between climate and tropical cyclone formation, including experiments with two-way interaction between the ocean and the atmosphere and variations in atmospheric aerosols.
Abstract:
Observational analyses of running 5-year ocean heat content trends (Ht) and net downward top-of-atmosphere radiation (N) are significantly correlated (r ~ 0.6) from 1960 to 1999, but a spike in Ht in the early 2000s is likely spurious, since it is inconsistent with estimates of N from both satellite observations and climate model simulations. Variations in N between 1960 and 2000 were dominated by volcanic eruptions and are well simulated by the ensemble mean of coupled models from the Fifth Coupled Model Intercomparison Project (CMIP5). We find an observation-based reduction in N of -0.31 ± 0.21 W m-2 between 1999 and 2005 that potentially contributed to the recent warming slowdown, but the relative roles of external forcing and internal variability remain unclear. While present-day anomalies of N in the CMIP5 ensemble mean and observations agree, this may be due to a cancellation of errors in outgoing longwave and absorbed solar radiation.
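As a hedged illustration of the headline diagnostic (running 5-year trends of ocean heat content correlated with windowed N), here is a minimal sketch on synthetic annual series; the window handling and units are assumptions, not the authors' processing chain:

```python
# Hedged sketch (not the authors' code): correlate running 5-year trends of
# ocean heat content (H) with net downward TOA radiation (N), both given as
# hypothetical annual series. Window length and averaging are assumptions.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2000)
N = 0.5 + 0.3 * rng.standard_normal(years.size)           # W m-2, synthetic
H = np.cumsum(N + 0.2 * rng.standard_normal(years.size))  # heat content, synthetic

window = 5
# Running 5-year linear trend of H (slope per year) in each window
Ht = np.array([np.polyfit(np.arange(window), H[i:i + window], 1)[0]
               for i in range(years.size - window + 1)])
# Average N over the same windows for a like-for-like comparison
N_bar = np.array([N[i:i + window].mean()
                  for i in range(years.size - window + 1)])

r = np.corrcoef(Ht, N_bar)[0, 1]
print(f"correlation between 5-yr H trends and windowed N: r = {r:.2f}")
```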
Abstract:
Model projections of heavy precipitation and temperature extremes include large uncertainties. We demonstrate that the disagreement between individual simulations arises primarily from internal variability, whereas models agree remarkably well on the forced signal, i.e. the change in the absence of internal variability. Agreement is high on the spatial pattern of the forced heavy-precipitation response, which shows an intensification over most land regions, in particular Eurasia and North America. The forced response of heavy precipitation is even more robust than that of annual mean precipitation. Likewise, models agree on the forced response pattern of hot extremes, which shows the greatest intensification over midlatitude land regions. Thus, confidence in the forced changes of temperature and precipitation extremes in response to a given warming is high. Although in reality internal variability will be superimposed on that pattern, it is the forced response that determines the changes in temperature and precipitation extremes from a risk perspective.
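A minimal sketch of the forced-versus-internal decomposition this argument relies on, assuming an idealized ensemble whose members share the forcing and differ only through internal variability; the numbers are synthetic:

```python
# Hedged sketch (assumed setup, not the study's diagnostics): separate the
# forced signal from internal variability in an ensemble of extreme-index
# trends, where members share forcing but differ in initial conditions.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_grid = 40, 1000
forced = 0.8 + 0.1 * rng.standard_normal(n_grid)        # common forced trend pattern
noise = 1.5 * rng.standard_normal((n_members, n_grid))  # internal variability
trends = forced + noise                                 # simulated member trends

forced_estimate = trends.mean(axis=0)   # ensemble mean isolates the forced signal
internal = trends - forced_estimate     # residual spread is internal variability
print("spread across members :", trends.std(axis=0).mean().round(2))
print("error of forced signal:", np.abs(forced_estimate - forced).mean().round(2))
```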
Abstract:
The North Atlantic Ocean subpolar gyre (NA SPG) is an important region for initialising decadal climate forecasts. Climate model simulations and palaeoclimate reconstructions have indicated that this region could also exhibit large, internally generated variability on decadal timescales. Understanding these modes of variability, their consistency across models, and the conditions in which they exist is clearly important for improving the skill of decadal predictions, particularly when these predictions are made with the same underlying climate models. Here we describe and analyse a mode of internal variability in the NA SPG in a state-of-the-art, high-resolution, coupled climate model. This mode has a period of 17 years and explains 15–30% of the annual variance in related ocean indices. It arises from the advection of heat content anomalies around the NA SPG. Anomalous circulation drives the variability in the southern half of the NA SPG, whilst mean circulation and anomalous temperatures are important in the northern half. A negative feedback between Labrador Sea temperatures/densities and those in the North Atlantic Current is identified, which allows for the phase reversal. The atmosphere is found to act as a positive feedback on this mode via the North Atlantic Oscillation, which itself exhibits a spectral peak at 17 years. Decadal ocean density changes associated with this mode are driven by variations in temperature rather than salinity, a point on which models often disagree and which we suggest may affect the veracity of the underlying assumptions of anomaly-assimilating decadal prediction methodologies.
Abstract:
A theoretically expected consequence of the intensification of the hydrological cycle under global warming is that, on average, wet regions get wetter and dry regions get drier (WWDD). Recent studies, however, have found significant discrepancies between the expected pattern of change and observed changes over land. We assess the WWDD theory in four climate models. We find that the reported discrepancy can be traced to two main issues: (1) unforced internal climate variability strongly affects local wetness and dryness trends and can obscure underlying agreement with WWDD, and (2) dry land regions are not constrained to become drier by enhanced moisture divergence, since evaporation cannot exceed precipitation over multiannual time scales. Over land, where the available water does not limit evaporation, a “wet gets wetter” signal predominates. On seasonal time scales, on which evaporation can exceed precipitation, trends of the wet season becoming wetter and the dry season becoming drier are also found.
Abstract:
Background: Along the internal carotid artery (ICA), atherosclerotic plaques are often located in its cavernous sinus (parasellar) segments (pICA). Studies indicate that the incidence of pre-atherosclerotic lesions is linked with the complexity of the pICA; however, the pICA shape has never been objectively characterized. Our study aims to provide objective mathematical characterizations of the pICA shape. Methods and results: Three-dimensional (3D) computer models, reconstructed from contrast-enhanced computed tomography (CT) data of 30 randomly selected patients (60 pICAs), were analyzed with modern visualization software and new mathematical algorithms. As objective measures of pICA shape complexity, we provide calculations of the curvature energy, torsion energy, and total complexity of 3D skeletons of the pICA lumen. We further measured the posterior knee of the so-called "carotid siphon" with a virtual goniometer and performed correlations between the objective mathematical calculations and the subjective angle measurements. Conclusions: Firstly, our study provides mathematical characterizations of the pICA shape, which can serve as objective reference data for analyzing connections between pICA shape complexity and vascular diseases. Secondly, we provide an objective method for creating such data. Thirdly, we evaluate the usefulness of subjective goniometric measurements of the angle of the posterior knee of the carotid siphon.
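The curvature and torsion energies referred to here are conventionally built from the differential geometry of the vessel centerline; a plausible formulation, with the exact normalization treated as an assumption rather than taken from the paper, is:

\[
E_\kappa = \int_0^L \kappa(s)^2\,\mathrm{d}s, \qquad
E_\tau = \int_0^L \tau(s)^2\,\mathrm{d}s,
\]

where \(\kappa\) and \(\tau\) are the curvature and torsion of the arc-length-parametrized skeleton of length \(L\); a total complexity can then be formed by combining the two energies.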
Abstract:
Inside the 'cavernous sinus' or 'parasellar region', the human internal carotid artery takes the shape of a siphon that is twisted and torqued in three dimensions and surrounded by a network of veins. The parasellar section of the internal carotid artery is of broad biological and medical interest, as its peculiar shape is associated with temperature regulation in the brain and correlated with the occurrence of vascular pathologies. The present study aims to provide anatomical descriptions and objective mathematical characterizations of the shape of the parasellar section of the internal carotid artery in human infants and its modifications during ontogeny. Three-dimensional (3D) computer models of the parasellar section of the internal carotid artery of infants were generated with a state-of-the-art 3D reconstruction method and analysed using both traditional morphometric methods and novel mathematical algorithms. We show that four constant, demarcated bends can be described along the infant parasellar section of the internal carotid artery, and we provide measurements of their angles. We further provide calculations of the curvature energy, the torsion energy, and the total complexity of the 3D skeleton of the parasellar section of the internal carotid artery, and compare this complexity in infants and adults. Finally, we examine the relationship between shape parameters of the parasellar section of the internal carotid artery in infants and the occurrence of intima cushions, and evaluate the reliability of subjective angle measurements for characterizing the complexity of the parasellar section of the internal carotid artery in infants. The results can serve as objective reference data for comparative studies and for medical imaging diagnostics. They also form the basis for a new hypothesis that explains the mechanisms responsible for the ontogenetic transformation in the shape of the parasellar section of the internal carotid artery.
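A minimal numerical sketch of such skeleton calculations, assuming an ordered 3D point list and finite-difference Frenet formulas (a synthetic helix stands in for a reconstructed artery; this is not the paper's algorithm):

```python
# Hedged sketch (an assumed discretization, not the paper's method): estimate
# curvature and torsion energies of a 3D vessel skeleton given as an ordered
# point list, using finite-difference Frenet formulas.
import numpy as np

t = np.linspace(0, 4 * np.pi, 400)
skeleton = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])  # synthetic helix

d1 = np.gradient(skeleton, axis=0)   # first derivative along the polyline
d2 = np.gradient(d1, axis=0)         # second derivative
d3 = np.gradient(d2, axis=0)         # third derivative

cross = np.cross(d1, d2)
speed = np.linalg.norm(d1, axis=1)
# Parametrization-invariant curvature and torsion of a space curve
kappa = np.linalg.norm(cross, axis=1) / speed**3
tau = np.einsum('ij,ij->i', cross, d3) / np.linalg.norm(cross, axis=1)**2

ds = speed  # arc-length element per sample (unit parameter step)
E_kappa = np.sum(kappa**2 * ds)
E_tau = np.sum(tau**2 * ds)
print(f"curvature energy ~ {E_kappa:.2f}, torsion energy ~ {E_tau:.2f}")
```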
Abstract:
The ever-increasing robustness and reliability of flow-simulation methods have consolidated CFD as a major tool in virtually all branches of fluid mechanics. Traditionally, those methods have played a crucial role in the analysis of flow physics. In more recent years, though, the subject has broadened considerably, with the development of optimization and inverse design applications. Since then, the search for efficient ways to evaluate flow-sensitivity gradients has received the attention of numerous researchers. In this scenario, the adjoint method has emerged as, quite possibly, the most powerful tool for the job, which heightens the need for a clear understanding of its conceptual basis. Yet, some of its underlying aspects are still subject to debate in the literature, despite all the research that has been carried out on the method. Such is the case with the adjoint boundary and internal conditions, in particular. The present work aims to shed more light on that topic, with emphasis on the need for an internal shock condition. By following the path of previous authors, the quasi-1D Euler problem is used as a vehicle to explore those concepts. The results clearly indicate that the behavior of the adjoint solution through a shock wave ultimately depends upon the nature of the objective functional.
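For readers new to the method, the adjoint identity underlying such gradient evaluations can be stated compactly in its generic discrete form; this is the standard formulation, not the paper's quasi-1D derivation. With flow residual \(R(u,a)=0\), state \(u\), design variable \(a\), and objective \(J(u,a)\):

\[
\left(\frac{\partial R}{\partial u}\right)^{\!T}\psi = \left(\frac{\partial J}{\partial u}\right)^{\!T},
\qquad
\frac{\mathrm{d}J}{\mathrm{d}a} = \frac{\partial J}{\partial a} - \psi^{T}\,\frac{\partial R}{\partial a},
\]

so a single adjoint solve for \(\psi\) yields the gradient with respect to arbitrarily many design variables. The boundary and internal (shock) conditions debated in the paper arise when these relations are posed in continuous rather than discrete form.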
Abstract:
An international standard, ISO/DP 9459-4, has been proposed to establish a uniform standard of quality for small, factory-made solar heating systems. In this proposal, system components are tested separately, and total system performance is calculated using system simulations based on component model parameter values validated against the results from the component tests. Another approach is to test the whole system in operation under representative conditions, where the results can be used as a measure of general system performance. The advantage of system testing of this form is that it does not depend on simulations and the possible inaccuracies of the models. Its disadvantage is that it is restricted to the boundary conditions of the test. Component testing with system simulation is flexible, but requires an accurate and reliable simulation model. The heat store is a key component concerning system performance. Thus, this work focuses on the storage system, consisting of the store, electrical auxiliary heater, heat exchangers, and tempering valve. Four different storage system configurations with a volume of 750 litres were tested in an indoor system test using a six-day test sequence. A store component test and system simulation were carried out on one of the four configurations, applying the proposed standard for stores, ISO/DP 9459-4A. Three newly developed test sequences for internal load-side heat exchangers, not in the proposed ISO standard, were also carried out. The MULTIPORT store model was used for this work. This paper discusses the results of the indoor system test, the store component test, the validation of the store model parameter values, and the system simulations.
Abstract:
This paper studies a special class of vector smooth-transition autoregressive (VSTAR) models that contain common nonlinear features (CNFs), for which we propose a triangular representation and develop a procedure for testing CNFs in a VSTAR model. We first test a unit root against a stable STAR process for each individual time series and then, if the unit root is rejected in the first step, examine whether CNFs exist in the system using a Lagrange multiplier (LM) test. The LM test has a standard chi-squared asymptotic distribution. The critical values of our unit root tests and the small-sample properties of the F form of our LM test are studied by Monte Carlo simulations. We illustrate how to test for and model CNFs using monthly growth data on consumption and income for the United States (1985:1 to 2011:11).
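The LM statistic in such tests typically follows the standard T·R² recipe; a minimal sketch on synthetic data, with the auxiliary regressors chosen for illustration rather than taken from the paper's CNF procedure:

```python
# Hedged sketch (generic LM machinery, not the paper's specific CNF test):
# an LM-type statistic computed as T * R^2 from an auxiliary regression of
# the restricted model's residuals on the full regressor set; under the null
# it is asymptotically chi-squared with dof = number of added terms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T = 300
y = np.zeros(T)
for t in range(1, T):                        # synthetic AR(1) data
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

ylag = y[:-1]
X0 = np.column_stack([np.ones(T - 1), ylag])             # restricted (linear) model
beta, *_ = np.linalg.lstsq(X0, y[1:], rcond=None)
resid = y[1:] - X0 @ beta

X1 = np.column_stack([X0, ylag**2, ylag**3])             # add nonlinear terms
gamma, *_ = np.linalg.lstsq(X1, resid, rcond=None)
fitted = X1 @ gamma
R2 = 1.0 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)

LM = (T - 1) * R2
dof = 2                                                  # number of added regressors
print(f"LM = {LM:.2f}, p = {stats.chi2.sf(LM, dof):.3f}")
```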
Abstract:
This work concerns forecasting with vector nonlinear time series models when the errors are correlated. Point forecasts are obtained numerically using bootstrap methods and illustrated by two examples. The evaluation concentrates on studying forecast equality and encompassing. Nonlinear impulse responses are further considered and graphically summarized by highest-density regions. Finally, two macroeconomic data sets are used to illustrate our work. The forecasts from a linear or nonlinear model can contribute useful information absent from the forecasts of the other model.
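A minimal sketch of bootstrap point forecasting for a nonlinear model, assuming a simple threshold AR(1) and iid residual resampling (correlated errors, the setting the paper addresses, would call for a block or model-based resampling scheme instead):

```python
# Hedged sketch (an assumed threshold-AR example, not the paper's models):
# bootstrap point forecasts for a nonlinear time-series model, obtained by
# simulating forward with resampled residuals and averaging the paths.
import numpy as np

rng = np.random.default_rng(3)

def tar_step(y, eps):
    """One step of a simple threshold AR(1): the regime depends on sign(y)."""
    return (0.7 * y if y > 0 else -0.3 * y) + eps

# Synthetic history and its in-sample residuals (model assumed known here)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = tar_step(y[t - 1], rng.standard_normal())
resid = np.array([y[t] - tar_step(y[t - 1], 0.0) for t in range(1, 200)])

h, B = 5, 2000                        # horizon and number of bootstrap paths
paths = np.empty((B, h))
for b in range(B):
    level = y[-1]
    for j in range(h):
        level = tar_step(level, rng.choice(resid))   # resample residuals (iid)
        paths[b, j] = level

point_forecast = paths.mean(axis=0)   # bootstrap point forecast per horizon
print(np.round(point_forecast, 3))
```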
Abstract:
Location models are used for planning the location of multiple service centers in order to serve a geographically distributed population. A cornerstone of such models is the measure of distance between the service center and a set of demand points, viz. the locations of the population (customers, pupils, patients, and so on). Theoretical as well as empirical evidence supports the current practice of using the Euclidean distance in metropolitan areas. In this paper, we argue and provide empirical evidence that such a measure is misleading once location models are applied to rural areas with heterogeneous transport networks. This paper stems from the problem of finding an optimal allocation of a pre-specified number of hospitals in a large Swedish region with a low population density. We conclude that the Euclidean distance and network distances based on a homogeneous network (equal travel costs in the whole network) give approximately the same optima. However, network distances calculated from a heterogeneous network (different travel costs in different parts of the network) give widely different optima as the number of hospitals increases. In terms of accessibility, we find that the recent closure of hospitals and the non-optimal location of the remaining ones have increased the average travel distance for the population by 75%. Finally, aggregating the population misplaces the hospitals by 10 km on average.
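A toy sketch of the core comparison, assuming a single facility, four demand points, and made-up travel costs; it shows how a heterogeneous network can move the optimum away from the Euclidean choice:

```python
# Hedged sketch (a toy instance, not the study's Swedish data): locate one
# facility to minimize total demand-weighted travel, comparing Euclidean
# distance with shortest paths on a network whose travel costs are
# heterogeneous (a fast trunk road next to a slow local road).
import heapq
import math

coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (10, 0)}
demand = {"A": 1, "B": 1, "C": 2, "D": 1}
# Edge costs reflect travel effort, not geometry: A-D is a fast highway,
# C-D a slow local road.
edges = {("A", "B"): 1, ("B", "C"): 1, ("C", "D"): 30, ("A", "D"): 4}

graph = {n: [] for n in coords}
for (u, v), w in edges.items():
    graph[u].append((v, w))
    graph[v].append((u, w))

def network_dist(src):
    """Dijkstra shortest-path costs from src to every node."""
    dist = {n: math.inf for n in graph}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def euclid(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

euclid_cost = {s: sum(demand[p] * euclid(s, p) for p in coords) for s in coords}
net_cost = {s: sum(demand[p] * network_dist(s)[p] for p in coords) for s in coords}
print("Euclidean optimum:", min(euclid_cost, key=euclid_cost.get))  # -> C
print("Network optimum:  ", min(net_cost, key=net_cost.get))        # -> B
```

In this toy instance the Euclidean measure places the facility at C, while the heterogeneous network moves it to B: exactly the kind of divergence the paper reports for rural networks.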