66 results for Model of the semantic fields
Abstract:
The polynyas of the Laptev Sea are regions of particular interest due to the strong formation of Arctic sea-ice. In order to simulate the polynya dynamics and to quantify ice production, we apply the Finite Element Sea-Ice Ocean Model FESOM. In previous simulations FESOM has been forced with daily atmospheric NCEP (National Centers for Environmental Prediction) reanalysis 1 data. For the periods 1 April to 9 May 2008 and 1 January to 8 February 2009 we examine the impact of different forcing data: daily and 6-hourly NCEP reanalysis 1 (1.875° x 1.875°), 6-hourly NCEP reanalysis 2 (1.875° x 1.875°), 6-hourly analyses from the GME (Global Model of the German Weather Service) (0.5° x 0.5°) and high-resolution hourly COSMO (Consortium for Small-Scale Modeling) data (5 km x 5 km). In all FESOM simulations, except for those with 6-hourly and daily NCEP 1 data, the openings and closings of polynyas are simulated in broad agreement with satellite products. Over the fast-ice area the wind fields of all atmospheric data are similar and close to in situ measurements. Over the polynya areas, however, there are strong differences between the forcing data with respect to air temperature and turbulent heat flux. These differences have a strong impact on sea-ice production rates. Depending on the forcing fields, polynya ice production ranges from 1.4 km³ to 7.8 km³ during 1 April to 9 May 2008 and from 25.7 km³ to 66.2 km³ during 1 January to 8 February 2009. Therefore, atmospheric forcing data with high spatial and temporal resolution which account for the presence of the polynyas are needed to reduce the uncertainty in quantifying ice production in polynyas.
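To make concrete how differences in surface heat loss translate into differences in polynya ice production, the following minimal Python sketch converts a net heat loss over open water into an ice volume via the latent heat of fusion. The constants and example numbers are illustrative assumptions only; this is not FESOM's thermodynamic scheme or any value from the study.

```python
# Minimal sketch (not FESOM's thermodynamics): converting a net surface heat loss
# over a polynya into an ice-production volume. All numbers are illustrative
# assumptions, not values from the study.

RHO_ICE = 910.0      # sea-ice density [kg m^-3] (typical value)
L_FUSION = 3.34e5    # latent heat of fusion [J kg^-1] (typical value)

def ice_production_km3(net_heat_loss_wm2: float, area_km2: float, days: float) -> float:
    """Ice volume produced when net_heat_loss_wm2 (W m^-2, upward positive)
    is extracted from open water of area_km2 for the given number of days."""
    area_m2 = area_km2 * 1e6
    seconds = days * 86400.0
    ice_volume_m3 = net_heat_loss_wm2 * area_m2 * seconds / (RHO_ICE * L_FUSION)
    return ice_volume_m3 / 1e9  # m^3 -> km^3

# Hypothetical example: 300 W m^-2 heat loss over 5000 km^2 of open water for 39 days
print(f"{ice_production_km3(300.0, 5000.0, 39.0):.1f} km^3")
```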
Abstract:
FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.
Abstract:
The entropy budget of the coupled atmosphere–ocean general circulation model HadCM3 is calculated. Estimates of the different entropy sources and sinks of the climate system are obtained directly from the diabatic heating terms, and an approximate estimate of the planetary entropy production is also provided. The rate of material entropy production of the climate system is found to be ∼50 mW m⁻² K⁻¹, a value intermediate in the range 30–70 mW m⁻² K⁻¹ previously reported from different models. The largest part of this is due to sensible and latent heat transport (∼38 mW m⁻² K⁻¹). Another 13 mW m⁻² K⁻¹ is due to dissipation of kinetic energy in the atmosphere by friction and Reynolds stresses. Numerical entropy production in the atmosphere dynamical core is found to be about 0.7 mW m⁻² K⁻¹. The material entropy production within the ocean due to turbulent mixing is ∼1 mW m⁻² K⁻¹, a very small contribution to the material entropy production of the climate system. The rate of change of entropy of the model climate system is about 1 mW m⁻² K⁻¹ or less, which is comparable with the typical size of the fluctuations of the entropy sources due to interannual variability, and a more accurate closure of the budget than achieved by previous analyses. Results are similar for FAMOUS, which has a lower spatial resolution but similar formulation to HadCM3, while more substantial differences are found with respect to other models, suggesting that the formulation of the model has an important influence on the climate entropy budget. Since this is the first diagnosis of the entropy budget in a climate model of the type and complexity used for projection of twenty-first century climate change, it would be valuable if similar analyses were carried out for other such models.
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent with a non-spatial validation. (c) 2007 Elsevier B.V. All rights reserved.
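A hedged sketch of the kind of joint linear mixed model described above is given below; the symbols and covariance choices are generic placeholders, not the authors' exact specification.

```latex
% Illustrative sketch of the joint model structure described above; the symbols
% and covariance choices are generic placeholders, not the authors' exact
% specification.
\begin{equation}
  z_v(\mathbf{s}) \;=\; \mathbf{x}(\mathbf{s})^{\mathsf{T}}\boldsymbol{\beta}_v
  \;+\; u_v(\mathbf{s}) \;+\; \varepsilon_v(\mathbf{s}),
  \qquad v \in \{\mathrm{obs},\,\mathrm{pred}\},
\end{equation}
where $\mathbf{x}^{\mathsf{T}}\boldsymbol{\beta}_v$ is the spatial trend, the $u_v$ are
coarse-scale random components with spatially correlated (and cross-correlated)
covariance, and the $\varepsilon_v$ are fine-scale, spatially independent errors;
the variance parameters are estimated by residual maximum likelihood (REML).
```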
Abstract:
ERA-Interim is the latest global atmospheric reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). The ERA-Interim project was conducted in part to prepare for a new atmospheric reanalysis to replace ERA-40, which will extend back to the early part of the twentieth century. This article describes the forecast model, data assimilation method, and input datasets used to produce ERA-Interim, and discusses the performance of the system. Special emphasis is placed on various difficulties encountered in the production of ERA-40, including the representation of the hydrological cycle, the quality of the stratospheric circulation, and the consistency in time of the reanalysed fields. We provide evidence for substantial improvements in each of these aspects. We also identify areas where further work is needed and describe opportunities and objectives for future reanalysis projects at ECMWF.
Abstract:
Acrylamide is formed from reducing sugars and asparagine during the preparation of French fries. The commercial preparation of French fries is a multi-stage process involving the preparation of frozen, par-fried potato strips for distribution to catering outlets where they are finish fried. The initial blanching, treatment in glucose solution and par-frying steps are crucial since they determine the levels of precursors present at the beginning of the finish frying process. In order to minimize the quantities of acrylamide in cooked fries, it is important to understand the impact of each stage on the formation of acrylamide. Acrylamide, amino acids, sugars, moisture, fat and color were monitored at time intervals during the frying of potato strips which had been dipped in varying concentrations of glucose and fructose during a typical pretreatment. A mathematical model of the finish-frying was developed based on the fundamental chemical reaction pathways, incorporating moisture and temperature gradients in the fries. This showed the contribution of both glucose and fructose to the generation of acrylamide, and accurately predicted the acrylamide content of the final fries.
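As a rough illustration of reaction-pathway kinetics of the kind described above (reducing sugar + asparagine forming acrylamide, with competing degradation), here is a minimal Python sketch at a constant frying temperature. The rate constants, temperature, and initial amounts are assumptions for illustration, not the paper's fitted model, and the moisture and temperature gradients inside the fry are ignored.

```python
# Minimal sketch of simplified acrylamide-formation kinetics: reducing sugar +
# asparagine -> acrylamide, with first-order acrylamide degradation. All rate
# constants, the temperature, and initial amounts are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant [J mol^-1 K^-1]

def arrhenius(A, Ea, T):
    """Arrhenius rate constant for pre-exponential A and activation energy Ea [J/mol]."""
    return A * np.exp(-Ea / (R * T))

T_oil = 175.0 + 273.15  # assumed constant frying temperature [K]
k_form = arrhenius(A=5e6, Ea=1.0e5, T=T_oil)   # sugar + asparagine -> acrylamide
k_loss = arrhenius(A=1e5, Ea=9.0e4, T=T_oil)   # acrylamide degradation

def rhs(t, y):
    sugar, asn, aa = y          # reducing sugar, asparagine, acrylamide (arbitrary units)
    r_form = k_form * sugar * asn
    return [-r_form, -r_form, r_form - k_loss * aa]

sol = solve_ivp(rhs, (0.0, 180.0), [1.0, 0.5, 0.0])  # a 3-minute finish fry
print("acrylamide at end of frying:", sol.y[2, -1])
```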
Abstract:
The Canadian Middle Atmosphere Modelling (MAM) project is a collaboration between the Atmospheric Environment Service (AES) of Environment Canada and several Canadian universities. Its goal is the development of a comprehensive General Circulation Model of the troposphere-stratosphere-mesosphere system, starting from the AES/CCCma third-generation atmospheric General Circulation Model. This paper describes the basic features of the first-generation Canadian MAM and some aspects of its radiative-dynamical climatology. Standard first-order mean diagnostics are presented for monthly means and for the annual cycle of zonal-mean winds and temperatures. The mean meridional circulation is examined, and comparison is made between the steady diabatic, downward controlled, and residual stream functions. It is found that downward control holds quite well in the monthly mean through most of the middle atmosphere, even during equinoctial periods. The relative roles of different drag processes in determining the mean downwelling over the wintertime polar middle stratosphere are examined, and the vertical structure of the drag is quantified.
Abstract:
In this paper the origin and evolution of the Sun’s open magnetic flux are considered for single magnetic bipoles as they are transported across the Sun. The effects of magnetic flux transport on the radial field at the surface of the Sun are modeled numerically by developing earlier work by Wang, Sheeley, and Lean (2000). The paper considers how the initial tilt of the bipole axis (α) and its latitude of emergence affect the variation and magnitude of the surface and open magnetic flux. The amount of open magnetic flux is estimated by constructing potential coronal fields. It is found that the open flux may evolve independently from the surface field for certain ranges of the tilt angle. For a given tilt angle, the lower the latitude of emergence, the higher the magnitude of the surface and open flux at the end of the simulation. In addition, three types of behavior are found for the open flux depending on the initial tilt angle of the bipole axis. When the tilt is such that α ≥ 2° the open flux is independent of the surface flux and initially increases before decaying away. In contrast, for tilt angles in the range −16° < α < 2° the open flux follows the surface flux and continually decays. Finally, for α ≤ −16° the open flux first decays and then increases in magnitude towards a second maximum before decaying away. This behavior of the open flux can be explained in terms of two competing effects produced by differential rotation. Firstly, differential rotation may increase or decrease the open flux by rotating the centers of each polarity of the bipole at different rates when the axis has tilt. Secondly, it decreases the open flux by increasing the length of the polarity inversion line where flux cancellation occurs. The results suggest that, in order to reproduce a realistic model of the Sun’s open magnetic flux over a solar cycle, it is important to have accurate input data on the latitude of emergence of bipoles along with the variation of their tilt angles as the cycle progresses.
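The simulations described build on the surface flux-transport approach of Wang, Sheeley, and Lean (2000); a standard form of the radial-field transport equation used in such models is sketched below. The notation and the specific profiles are generic placeholders, not the coefficients used in the paper.

```latex
% Standard surface flux-transport equation of the kind used in Wang-Sheeley-type
% models; Omega(theta), v(theta), D and S are generic placeholders, not the
% profiles or source term used in the paper.
\begin{equation}
\frac{\partial B_r}{\partial t}
 = -\,\Omega(\theta)\,\frac{\partial B_r}{\partial \phi}
 - \frac{1}{R_\odot \sin\theta}\,\frac{\partial}{\partial \theta}
   \bigl[v(\theta)\,B_r \sin\theta\bigr]
 + \frac{D}{R_\odot^{2}}
   \left[\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}
   \left(\sin\theta\,\frac{\partial B_r}{\partial \theta}\right)
   + \frac{1}{\sin^{2}\theta}\,\frac{\partial^{2} B_r}{\partial \phi^{2}}\right]
 + S(\theta,\phi,t),
\end{equation}
where $\Omega(\theta)$ is the differential-rotation rate, $v(\theta)$ the poleward
meridional flow, $D$ the supergranular diffusivity, and $S$ a source term
representing emerging bipoles.
```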
Abstract:
Calculations using a numerical model of the convection-dominated high-latitude ionosphere are compared with observations made by EISCAT as part of the UK-POLAR Special Programme. The data used were for 24–25 October 1984, which was characterized by an unusually steady IMF, with Bz < 0 and By > 0; in the calculations it was assumed that a steady IMF implies steady convection conditions. Using the electric field models of Heppner and Maynard (1983) appropriate to By > 0 and precipitation data taken from Spiro et al. (1982), we calculated the velocities and electron densities appropriate to the EISCAT observations. Many of the general features of the velocity data were reproduced by the model. In particular, the phasing of the change from eastward to westward flow in the vicinity of the Harang discontinuity, flows near the dayside throat and a region of slow flow at higher latitudes near dusk were well reproduced. In the afternoon sector modelled velocity values were significantly less than those observed. Electron density calculations showed good agreement with EISCAT observations near the F-peak, but compared poorly with observations near 211 km. In both cases, the greatest disagreement occurred in the early part of the observations, where the convection pattern was poorly known and showed some evidence of long-term temporal change. Possible causes for the disagreement between observations and calculations are discussed and shown to raise interesting and, as yet, unresolved questions concerning the interpretation of the data. For the data set used, the late afternoon dip in electron density observed near the F-peak and interpreted as the signature of the mid-latitude trough is well reproduced by the calculations. Calculations indicate that it does not arise from long residence times of plasma on the nightside, but is the signature of a gap between two major ionization sources, viz. photoionization and particle precipitation.
Abstract:
The no response test is a new scheme in inverse problems for partial differential equations which was recently proposed in [D. R. Luke and R. Potthast, SIAM J. Appl. Math., 63 (2003), pp. 1292–1312] in the framework of inverse acoustic scattering problems. The main idea of the scheme is to construct special probing waves which are small on some test domain. Then the response for these waves is constructed. If the response is small, the unknown object is assumed to be a subset of the test domain. The response is constructed from one, several, or many particular solutions of the problem under consideration. In this paper, we investigate the convergence of the no response test for the reconstruction of information about inclusions D from the Cauchy values of solutions to the Helmholtz equation on an outer surface $\partial\Omega$ with $\overline{D} \subset \Omega$. We show that the one-wave no response test provides a criterion to test the analytic extensibility of a field. In particular, we investigate the construction of approximations for the set of singular points $N(u)$ of the total fields u from one given pair of Cauchy data. Thus, the no response test solves a particular version of the classical Cauchy problem. Also, if an infinite number of fields is given, we prove that a multifield version of the no response test reconstructs the unknown inclusion D. This is the first convergence analysis which could be achieved for the no response test.
Abstract:
The storage and processing capacity realised by computing has led to an explosion of data retention. We have now reached the point of information overload and must begin to use computers to process more complex information. In particular, the proposition of the Semantic Web has given structure to this problem, but it has not yet been realised practically. The largest of its problems is that of ontology construction; without a suitable automatic method, most ontologies will have to be encoded by hand. In this paper we discuss the current methods for semi- and fully automatic construction and their current shortcomings. In particular we pay attention to the application of ontologies to products and the practical application of the ontologies.
Abstract:
This paper describes the results and conclusions of the INCA (Integrated Nitrogen Model for European CAtchments) project and sets the findings in the context of the ELOISE (European Land-Ocean Interaction Studies) programme. The INCA project was concerned with the development of a generic model of the major factors and processes controlling nitrogen dynamics in European river systems, thereby providing a tool (a) to aid the scientific understanding of nitrogen transport and retention in catchments and (b) for river-basin management and policy-making. The findings of the study highlight the heterogeneity of the factors and processes controlling nitrogen dynamics in freshwater systems. Nonetheless, the INCA model was able to simulate the in-stream nitrogen concentrations and fluxes observed at annual and seasonal timescales in Arctic, Continental and Maritime-Temperate regimes. This result suggests that the data requirements and structural complexity of the INCA model are appropriate to simulate nitrogen fluxes across a wide range of European freshwater environments. This is a major requirement for the production of coupled river-estuary-coastal shelf models for the management of our aquatic environment. With regard to river-basin management, to achieve an efficient reduction in nutrient fluxes from the land to the estuarine and coastal zone, the model simulations suggest that management options must be adaptable to the prevailing environmental and socio-economic factors in individual catchments: 'Blanket approaches' to environmental policy appear too simple. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
The influence of orography on the structure of stationary planetary Rossby waves is studied in the context of a contour dynamics model of the large-scale atmospheric flow. Orography of infinitesimal and finite amplitude is studied using analytical and numerical techniques. Three different types of orography are considered: idealized orography in the form of a global wave, idealized orography in the form of a local table mountain, and the earth's orography. The study confirms the importance of resonances, both in the infinitesimal orography and in the finite orography cases. With finite orography the stationary waves organize themselves into a one-dimensional set of solutions which, due to the resonances, is piecewise connected. It is pointed out that these stationary waves could be relevant for atmospheric regimes.
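For context on why resonances matter for orographically forced stationary waves, the textbook linear barotropic result is sketched below; this is not the contour-dynamics model used in the paper, only the standard result it generalises.

```latex
% Textbook topographic resonance for stationary barotropic Rossby waves in a
% uniform zonal flow U on a beta-plane; included for orientation only, not the
% paper's contour-dynamics formulation.
\begin{equation}
\psi_{\mathbf{k}} \;\propto\; \frac{f_0\, h_{\mathbf{k}} / H}{K^{2} - K_s^{2}},
\qquad K^{2} = k^{2} + l^{2}, \qquad K_s^{2} = \frac{\beta}{U},
\end{equation}
so the forced stationary wave becomes resonant when the total wavenumber $K$ of the
orographic forcing $h_{\mathbf{k}}$ approaches the stationary wavenumber $K_s$.
```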
Abstract:
In this paper, we generalise a previously described model of the error-prone polymerase chain reaction (PCR) to conditions of arbitrarily variable amplification efficiency and initial population size. Generalisation of the model to these conditions improves the correspondence to observed and expected behaviours of PCR, and restricts the extent to which the model may explore sequence space for a prescribed set of parameters. Error-prone PCR in realistic reaction conditions is predicted to be less effective at generating grossly divergent sequences than the original model. The estimate of mutation rate per cycle obtained by sampling sequences from an in vitro PCR experiment is correspondingly affected by the choice of model and parameters. (c) 2005 Elsevier Ltd. All rights reserved.
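The following minimal Python sketch illustrates the kind of process being modelled: error-prone PCR with a per-cycle amplification efficiency and a per-base substitution rate. The efficiencies, mutation rate, and bookkeeping are illustrative assumptions, not the generalised model described above.

```python
# Minimal Monte Carlo sketch of error-prone PCR with variable per-cycle
# amplification efficiency and a per-base substitution rate; all parameters
# are illustrative assumptions, not the paper's generalised model.
import random

BASES = "ACGT"

def mutate(seq: str, per_base_rate: float) -> str:
    """Copy a sequence, substituting each base with probability per_base_rate."""
    return "".join(
        random.choice([b for b in BASES if b != base]) if random.random() < per_base_rate else base
        for base in seq
    )

def error_prone_pcr(template: str, n_initial: int, efficiencies, per_base_rate: float):
    """Each cycle, a fraction `eff` of the pool is copied (with errors) and added."""
    pool = [template] * n_initial
    for eff in efficiencies:
        n_copied = int(round(eff * len(pool)))
        parents = random.sample(pool, n_copied)
        pool.extend(mutate(p, per_base_rate) for p in parents)
    return pool

random.seed(0)
template = "ACGTACGTACGTACGT"
final_pool = error_prone_pcr(template, n_initial=10,
                             efficiencies=[0.9, 0.8, 0.7, 0.6, 0.5],  # declining efficiency
                             per_base_rate=0.01)
n_mutants = sum(s != template for s in final_pool)
print(f"{len(final_pool)} molecules, {n_mutants} carry at least one substitution")
```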
Abstract:
The Stochastic Diffusion Search (SDS) was developed as a solution to the best-fit search problem. Thus, as a special case it is capable of solving the transform-invariant pattern recognition problem. SDS is efficient and, although inherently probabilistic, produces very reliable solutions in widely ranging search conditions. However, to date a systematic formal investigation of its properties has not been carried out. This thesis addresses this problem. The thesis reports results pertaining to the global convergence of SDS as well as a characterisation of its time complexity. However, the main emphasis of the work is on the resource allocation aspect of Stochastic Diffusion Search operations. The thesis introduces a novel model of the algorithm, generalising an Ehrenfest Urn Model from statistical physics. This approach makes it possible to obtain a thorough characterisation of the response of the algorithm in terms of the parameters describing the search conditions in the case of a unique best-fit pattern in the search space. This model is further generalised in order to account for different search conditions: two solutions in the search space and search for a unique solution in a noisy search space. Also, an approximate solution in the case of two alternative solutions is proposed and compared with predictions of the extended Ehrenfest Urn model. The analysis performed enabled a quantitative characterisation of the Stochastic Diffusion Search in terms of exploration and exploitation of the search space. It appeared that SDS is biased towards the latter mode of operation. This novel perspective on the Stochastic Diffusion Search led to an investigation of extensions of the standard SDS which would strike a different balance between these two modes of search space processing. Thus, two novel algorithms were derived from the standard Stochastic Diffusion Search, ‘context-free’ and ‘context-sensitive’ SDS, and their properties were analysed with respect to resource allocation. It appeared that they shared some of the desired features of their predecessor but also possessed some properties not present in the classic SDS. The theory developed in the thesis was illustrated throughout with carefully chosen simulations of a best-fit search for a string pattern, a simple but representative domain, enabling careful control of search conditions.
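To illustrate the standard test/diffusion cycle of SDS in the string-search domain mentioned above, here is a minimal Python sketch. The specific partial test, parameter values, and halting rule are assumptions for the sketch, not the thesis's exact formulation or its context-free/context-sensitive variants.

```python
# Minimal sketch of the standard Stochastic Diffusion Search test/diffusion cycle
# for a best-fit string search; parameters and the halting rule are illustrative
# assumptions, not the thesis's exact formulation.
import random

def sds_string_search(text: str, pattern: str, n_agents: int = 100, n_iters: int = 200) -> int:
    """Return the most popular hypothesis (offset into `text`) after a fixed number of iterations."""
    positions = range(len(text) - len(pattern) + 1)
    hypotheses = [random.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(n_iters):
        # Test phase: each agent checks one randomly chosen character of the pattern.
        for i, h in enumerate(hypotheses):
            j = random.randrange(len(pattern))
            active[i] = text[h + j] == pattern[j]
        # Diffusion phase: an inactive agent polls a random agent and copies its
        # hypothesis if that agent is active, otherwise re-seeds at random.
        for i in range(n_agents):
            if not active[i]:
                k = random.randrange(n_agents)
                hypotheses[i] = hypotheses[k] if active[k] else random.choice(positions)

    return max(set(hypotheses), key=hypotheses.count)

random.seed(1)
print(sds_string_search("xxxxhello worldxxxx", "world"))  # expected best-fit offset: 10
```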