121 results for finite and infinitesimal models
Abstract:
The and RT0 finite element schemes are among the most promising low-order elements for use in unstructured-mesh marine and lake models. Both are free of spurious elevation modes, have good dispersive properties and have a relatively low computational cost. In this paper, we derive both finite element schemes in the same unified framework and discuss their respective qualities in terms of conservation, consistency, propagation factor and convergence rate. We also highlight the impact that the local placement of variables can have on the model solution. The main conclusion we can draw is that the choice between elements is highly application dependent. We suggest that the element is better suited to purely hydrodynamical applications, while the RT0 element might perform better for hydrological applications that require scalar transport calculations.
Abstract:
Finite computing resources limit the spatial resolution of state-of-the-art global climate simulations to hundreds of kilometres. In neither the atmosphere nor the ocean are small-scale processes such as convection, clouds and ocean eddies properly represented. Climate simulations are known to depend, sometimes quite strongly, on the resulting bulk-formula representation of unresolved processes. Stochastic physics schemes within weather and climate models have the potential to represent the dynamical effects of unresolved scales in ways that conventional bulk-formula representations cannot. The application of stochastic physics to climate modelling is a rapidly advancing, important and innovative topic. The latest research findings are gathered together in the Theme Issue for which this paper serves as the introduction.
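As a generic illustration of the idea (not a scheme from the Theme Issue), a stochastic-physics perturbation can be as simple as multiplying a deterministic parameterized tendency by a random factor so an ensemble samples the uncertainty in unresolved processes. Everything in the sketch below, including the white-noise model and its amplitude, is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def perturbed_tendency(parameterized_tendency, sigma=0.3):
    """Multiply a deterministic (bulk-formula) tendency by (1 + noise).

    Generic illustration of a stochastic-physics idea, not a specific
    published scheme; sigma and the white-noise choice are assumptions.
    """
    noise = rng.normal(0.0, sigma)
    return (1.0 + noise) * parameterized_tendency

# e.g. perturb a convective heating tendency of 2.0 K/day
print(perturbed_tendency(2.0))
```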
Abstract:
There is increasing concern about soil enrichment with K+ and subsequent potential losses following long-term application of poor-quality water to agricultural land. Different models are increasingly being used for predicting or analyzing water flow and chemical transport in soils and groundwater. The convective-dispersive equation (CDE) and the convective log-normal transfer function (CLT) models were fitted to potassium (K+) leaching data; the two models produced an equivalent goodness of fit. Breakthrough curves simulated for a range of CaCl2 concentrations using parameters estimated at 15 mmol l−1 CaCl2 showed the peak position shifting earlier, with a higher K+ concentration, as the CaCl2 concentration used in the leaching experiments decreased. In a second approach, the parameters estimated from the 15 mmol l−1 CaCl2 solution were retained for all other CaCl2 concentrations and only the retardation factor (R) was optimised for each data set; this yielded better predictions. When the parameters estimated at 15 mmol l−1 CaCl2 are used, R must exceed the measured value as the CaCl2 concentration decreases (except for 10 mmol l−1 CaCl2). Both models suffer from the fact that they need to be calibrated against a data set, and some of their parameters are not measurable and cannot be determined independently.
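For orientation, the one-dimensional CDE with a retardation factor R is conventionally written as follows (standard textbook form; the symbols are generic, not taken from the paper):

$$R\,\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial z^{2}} - v\,\frac{\partial C}{\partial z},$$

where C is the solute (here K+) concentration, t is time, z is depth, D is the dispersion coefficient and v the pore-water velocity. The CLT model instead treats solute travel times to a given depth as log-normally distributed.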
Abstract:
General circulation models (GCMs) use the laws of physics and an understanding of past geography to simulate climatic responses. They are objective in character. However, they tend to require powerful computers to handle vast numbers of calculations. Nevertheless, it is now possible to compare results from different GCMs for a range of times and over a wide range of parameterisations for the past, present and future (e.g. in terms of predictions of surface air temperature, surface moisture, precipitation, etc.). GCMs are currently producing simulated climate predictions for the Mesozoic, which compare favourably with the distributions of climatically sensitive facies (e.g. coals, evaporites and palaeosols). They can be used effectively in the prediction of oceanic upwelling sites and the distribution of petroleum source rocks and phosphorites. Models also produce evaluations of other parameters that do not leave a geological record (e.g. cloud cover, snow cover) and equivocal phenomena such as storminess. Parameterisation of sub-grid scale processes is the main weakness in GCMs (e.g. land surfaces, convection, cloud behaviour) and model output for continental interiors is still too cold in winter by comparison with palaeontological data. The sedimentary and palaeontological record provides an important means by which GCMs may themselves be evaluated; this is important because the same GCMs are currently being used to predict possible changes in future climate. The Mesozoic Earth was, by comparison with the present, an alien world, as we illustrate here by reference to late Triassic, late Jurassic and late Cretaceous simulations. Dense forests grew close to both poles but experienced months-long daylight in warm summers and months-long darkness in cold snowy winters. Ocean depths were warm (8 degrees C or more to the ocean floor) and reefs, with corals, grew 10 degrees of latitude further north and south than at the present time. The whole Earth was warmer than now by 6 degrees C or more, giving more atmospheric humidity and a greatly enhanced hydrological cycle. Rainfall was predominantly convective in character, often focused over the oceans and leaving major desert expanses on the continental areas. Polar ice sheets are unlikely to have been present because of the high summer temperatures achieved. The model indicates extensive sea ice in the nearly enclosed Arctic seaway through a large portion of the year during the late Cretaceous, and the possibility of sea ice in adjacent parts of the Midwest Seaway over North America. The Triassic world was a predominantly warm world, the model output for evaporation and precipitation conforming well with the known distributions of evaporites, calcretes and other climatically sensitive facies for that time. The message from the geological record is clear. Through the Phanerozoic, Earth's climate has changed significantly, both on a variety of time scales and over a range of climatic states, usually baldly referred to as "greenhouse" and "icehouse", although these terms disguise more subtle states between these extremes. Any notion that the climate can remain constant for the convenience of one species of anthropoid is a delusion (although the recent rate of climatic change is exceptional).
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular, we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) full physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space-Weather Modeling (CISM), which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square error and correlation coefficients, indicate that the empirical coronal-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
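A minimal sketch of the point-by-point metrics mentioned above, computed for hypothetical modeled and observed solar wind speeds (the arrays are invented for illustration):

```python
import numpy as np

# Hypothetical observed and modeled radial solar wind speeds at L1 (km/s),
# sampled at a common cadence; values are illustrative only.
obs = np.array([420.0, 450.0, 610.0, 580.0, 390.0, 440.0])
model = np.array([400.0, 470.0, 560.0, 600.0, 410.0, 430.0])

mse = np.mean((model - obs) ** 2)        # mean-square error
corr = np.corrcoef(model, obs)[0, 1]     # Pearson correlation coefficient

print(f"MSE = {mse:.1f} (km/s)^2, r = {corr:.2f}")
```

Note that such point-by-point scores penalise small timing offsets heavily, which is consistent with the paper's finding that they understate the skill of the physics-based models.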
Abstract:
A time series of the observed transport through an array of moorings across the Mozambique Channel is compared with that of six model runs with ocean general circulation models. In the observations, the seasonal cycle cannot be distinguished from red noise, while this cycle is dominant in the transport of the numerical models. It is found, however, that the seasonal cycles of the observations and numerical models are similar in strength and phase. These cycles have an amplitude of 5 Sv and a maximum in September, and can be explained by the yearly variation of the wind forcing. The seasonal cycle in the models is dominant because the spectral density at other frequencies is underrepresented. The main deviations from the observations are found at depths shallower than 1500 m and in the 5/y–6/y frequency range. Nevertheless, the structure of eddies in the models is close to the observed eddy structure. The discrepancy is found to be related to the formation mechanism and the formation position of the eddies. In the observations, eddies are frequently formed from an overshooting current near the mooring section, as proposed by Ridderinkhof and de Ruijter (2003) and Harlander et al. (2009). This causes an alternation of events at the mooring section, varying between a strong southward current and the formation and passing of an eddy, which results in a large variation of transport in the 5/y–6/y frequency range. In the models, the eddies are formed further north and propagate through the section. No alternation similar to that in the observations occurs, resulting in a more constant transport.
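To make the seasonal-cycle numbers concrete, a least-squares fit of an annual harmonic to a transport series recovers amplitude and phase. The data below are synthetic and merely mimic the quoted 5 Sv amplitude and September maximum:

```python
import numpy as np

# Synthetic monthly transport series (Sv) over 5 years: a mean flow plus a
# seasonal harmonic peaking in September plus noise; values are illustrative.
t = np.arange(0, 5, 1 / 12)                       # time in years
transport = -17 + 5 * np.cos(2 * np.pi * (t - 8.5 / 12)) \
            + np.random.default_rng(0).normal(0, 2, t.size)

# Least-squares fit of T(t) = a + b*cos(2*pi*t) + c*sin(2*pi*t)
A = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t),
                     np.sin(2 * np.pi * t)])
a, b, c = np.linalg.lstsq(A, transport, rcond=None)[0]

amplitude = np.hypot(b, c)                        # seasonal amplitude (Sv)
phase = np.arctan2(c, b) / (2 * np.pi)            # fraction of year at maximum
print(f"amplitude ~ {amplitude:.1f} Sv, peak near month {phase % 1 * 12:.1f}")
```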
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, covering both conventional buildings and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstraction levels of these models, and their links to the literature on intelligent buildings. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges in the design and management of intelligent buildings, leading to the use of models that offer more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models provide a promising means of accommodating these complications, as intelligent buildings now need integrated technologies that involve solving complex, multi-objective and integrated decision problems.
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to their applications to conventional buildings and intelligent buildings. It concluded that the mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited to modelling intelligent buildings. Despite this progress, the possible future development of intelligent buildings based on current trends implies some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by working through an example of an intelligent building system and the mathematical models that have been developed for such a system, this review addresses the influence of mathematical models as a potential aid in developing intelligent buildings and perhaps even more advanced buildings of the future.
Abstract:
The present study examined the opinions of in-service and prospective chemistry teachers about the importance of using molecular and crystal models in secondary-level school practice, and investigated some of the reasons for their (non-)usage. The majority of participants stated that the use of models plays an important role in chemistry education and that they would use them more often if the circumstances were more favourable. Many teachers claimed that three-dimensional (3D) models are still not available in sufficient numbers at their schools; they also pointed to the lack of available computer facilities during chemistry lessons. The research revealed that, besides the inadequate material circumstances, fewer than one third of participants are able to use simple (freeware) computer programs for drawing molecular structures and presenting them in virtual space; however, both groups of teachers expressed willingness to improve their knowledge in this area. The investigation points to several actions which could be undertaken to improve the current situation.
Abstract:
We review and structure some of the mathematical and statistical models that have been developed over the past half century to grapple with theoretical and experimental questions about the stochastic development of aging over the life course. We suggest that the mathematical models are in large part addressing the problem of partitioning the randomness in aging: How does aging vary between individuals, and within an individual over the life course? How much of the variation is inherently related to some qualities of the individual, and how much is entirely random? How much of the randomness is cumulative, and how much is merely short-term flutter? We propose that recent lines of statistical inquiry in survival analysis could usefully grapple with these questions, all the more so if they were more explicitly linked to the relevant mathematical and biological models of aging. To this end, we describe points of contact among the various lines of mathematical and statistical research. We suggest some directions for future work, including the exploration of information-theoretic measures for evaluating components of stochastic models as the basis for analyzing experiments and anchoring theoretical discussions of aging.
Abstract:
Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of increasingly complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce these data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team’s development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change’s (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
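As a rough sketch of what a CIM-style metadata instance might look like when built programmatically, the snippet below assembles a small XML document. The element names and attributes are invented for illustration and do not reproduce the actual METAFOR CIM schema or its controlled vocabularies:

```python
import xml.etree.ElementTree as ET

# Hypothetical CIM-like metadata instance: an activity, the software
# component that ran it, and the data it produced.
sim = ET.Element("simulationRun")
ET.SubElement(sim, "activity").text = "historical run, CMIP5-style"
component = ET.SubElement(sim, "softwareComponent", name="ExampleGCM")
ET.SubElement(component, "property",
              vocabulary="resolution").text = "2.5 degrees"  # controlled-vocabulary term
ET.SubElement(sim, "dataObject").text = "monthly surface air temperature"

print(ET.tostring(sim, encoding="unicode"))
```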
Abstract:
Wave-activity conservation laws are key to understanding wave propagation in inhomogeneous environments. Their most general formulation follows from the Hamiltonian structure of geophysical fluid dynamics. For large-scale atmospheric dynamics, the Eliassen–Palm wave activity is a well-known example and is central to theoretical analysis. On the mesoscale, while such conservation laws have been worked out in two dimensions, their application to a horizontally homogeneous background flow in three dimensions fails because of a degeneracy created by the absence of a background potential vorticity gradient. Earlier three-dimensional results based on linear WKB theory considered only Doppler-shifted gravity waves, not waves in a stratified shear flow. Consideration of a background flow depending only on altitude is motivated by the parameterization of subgrid scales in climate models, where there is an imposed separation of horizontal length and time scales, but vertical coupling within each column. Here we show how this degeneracy can be overcome and wave-activity conservation laws derived for three-dimensional disturbances to a horizontally homogeneous background flow. Explicit expressions for pseudoenergy and pseudomomentum in the anelastic and Boussinesq models are derived, and it is shown how the previously derived relations for the two-dimensional problem can be treated as a limiting case of the three-dimensional problem. The results also generalize earlier three-dimensional results in that there is no slowly varying WKB-type requirement on the background flow, and the results are extendable to finite amplitude. The relationship A_E = cA_P between the pseudoenergy A_E and the pseudomomentum A_P, where c is the horizontal phase speed in the direction of symmetry associated with A_P, has important applications to gravity-wave parameterization and provides a generalized statement of the first Eliassen–Palm theorem.
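In generic notation (assumed here, not quoted from the paper), a wave-activity conservation law for a disturbance and the stated relation read

$$\frac{\partial A}{\partial t} + \nabla \cdot \mathbf{F} = 0, \qquad A_E = c\,A_P,$$

where A is a wave-activity density (pseudoenergy A_E or pseudomomentum A_P) and F the corresponding flux.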
Abstract:
The etiology of colorectal cancer (CRC), a common cause of cancer-related mortality globally, has strong associations with diet. There is considerable epidemiological evidence that fruits and vegetables are associated with reduced risk of CRC. This paper reviews the extensive evidence, from both in vitro studies and animal models, that components of berry fruits can modulate biomarkers of DNA damage and that these effects may be chemoprotective, given the likely role that oxidative damage plays in mutation rate and cancer risk. Human intervention trials with berries are generally consistent in indicating a capacity to significantly decrease oxidative damage to DNA, but they represent limited evidence for anticarcinogenicity, relying as they do on surrogate risk markers. To understand the effects of berry consumption on colorectal cancer risk, future studies will need to be well controlled, use defined berry extracts, use suitable and clinically relevant end points, and consider the importance of the gut microbiota.
Abstract:
There is ongoing work on the conceptual modelling of such business notions as Affordance and Capability. We have found that these notions can be constructively defined using elements and properties of executable behaviour models. In this paper, we clarify the definitions of Affordance and Capability using Coloured Petri Nets and Protocol models. The illustrating case is the process of drug injection. We show that different behaviour modelling techniques provide different precision for the definition of Affordance and Capability, and we clarify the conceptual models of these notions. We generalise that behaviour models can be used to improve the precision of conceptualisation.
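As a minimal illustration of what "executable behaviour model" means here, the sketch below fires a single transition in a toy, uncoloured Petri net. The net is invented and only loosely echoes the paper's drug-injection case:

```python
# A Petri net as a mapping from place names to token counts.
net = {"syringe_ready": 1, "patient_prepared": 1, "drug_injected": 0}

def fire(net, inputs, outputs):
    """Fire a transition if every input place holds at least one token."""
    if all(net[p] > 0 for p in inputs):
        for p in inputs:
            net[p] -= 1   # consume input tokens
        for p in outputs:
            net[p] += 1   # produce output tokens
        return True
    return False          # transition not enabled

fire(net, inputs=["syringe_ready", "patient_prepared"],
     outputs=["drug_injected"])
print(net)  # {'syringe_ready': 0, 'patient_prepared': 0, 'drug_injected': 1}
```

The executability is the point: whether an affordance or capability exists in a given state can be checked by asking whether the corresponding transition is enabled.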
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations, which deserves a dedicated future study. Furthermore, although the OA/OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 models between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For the 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge of anthropogenic OA sources, in both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data.
The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations, compared with the low MNB and lower correlation at remote sites, suggests that knowledge about the processes that govern aerosol processing, transport and removal, in addition to their sources, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, complexity is needed in models in order to distinguish between anthropogenic and natural OA, as required for climate mitigation, and to calculate the impact of OA on climate accurately.
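For reference, a minimal sketch of the mean normalized bias statistic as conventionally defined (the paper's observation screening and station weighting are not reproduced here):

```python
import numpy as np

def mean_normalized_bias(model, obs):
    """Mean normalized bias: average of (model - obs) / obs.

    MNB = 0 means no bias on average; MNB = -1 corresponds to a model that
    simulates zero mass everywhere.
    """
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.mean((model - obs) / obs)

# Toy usage: a model at half the observed concentrations gives MNB = -0.5.
print(mean_normalized_bias([1.0, 2.0], [2.0, 4.0]))
```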