925 results for Finite model generation


Relevance: 30.00%

Abstract:

Acrylamide is formed from reducing sugars and asparagine during the preparation of French fries. The commercial preparation of French fries is a multistage process involving the preparation of frozen, par-fried potato strips for distribution to catering outlets, where they are finish-fried. The initial blanching, treatment in glucose solution, and par-frying steps are crucial because they determine the levels of precursors present at the beginning of the finish-frying process. To minimize the quantities of acrylamide in cooked fries, it is important to understand the impact of each stage on the formation of acrylamide. Acrylamide, amino acids, sugars, moisture, fat, and color were monitored at time intervals during the frying of potato strips that had been dipped in various concentrations of glucose and fructose during a typical pretreatment. A mathematical model based on the fundamental chemical reaction pathways of the finish-frying was developed, incorporating moisture and temperature gradients in the fries. This showed the contribution of both glucose and fructose to the generation of acrylamide and accurately predicted the acrylamide content of the final fries.
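
A minimal, hypothetical sketch of the kind of reaction-kinetics calculation such a model rests on is given below; the two-step scheme, the rate constants and the lumped, isothermal treatment are illustrative assumptions only and are not the moisture- and temperature-resolved model developed in the study.

```python
# Minimal sketch (not the paper's model): a lumped, isothermal two-step
# kinetic scheme in which a reducing sugar (glucose or fructose) reacts
# with asparagine to form an intermediate that either yields acrylamide
# or is lost to other Maillard products. All rate constants are
# hypothetical placeholders (arbitrary units).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.08, 0.03, 0.05  # illustrative rate constants

def rhs(t, y):
    sugar, asn, inter, acryl = y
    r1 = k1 * sugar * asn        # sugar + asparagine -> intermediate
    r2 = k2 * inter              # intermediate -> acrylamide
    r3 = k3 * inter              # intermediate -> other Maillard products
    return [-r1, -r1, r1 - r2 - r3, r2]

y0 = [1.0, 0.5, 0.0, 0.0]        # normalised initial concentrations
sol = solve_ivp(rhs, (0.0, 10.0), y0, dense_output=True)

t_eval = np.linspace(0.0, 10.0, 6)
print(t_eval)
print(sol.sol(t_eval)[3])        # acrylamide concentration vs frying time
```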

Relevance: 30.00%

Abstract:

The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are expressed mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenge of developing the capability to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities, each with a computing capability of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a scientific workforce sufficient to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level; current limitations in computing power have placed severe constraints on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure, including hardware, software, and data analysis support, and the scientific capacity to interact with national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it, and will ultimately enable the climate community to provide society with climate predictions based on our best knowledge of science and the most advanced technology.

Relevance: 30.00%

Abstract:

Meteorological (met) station data is used as the basis for a number of influential studies into the impacts of the variability of renewable resources. Real turbine output data is often not easy to acquire, whereas meteorological wind data, supplied at a standardised height of 10 m, is widely available. This data can be extrapolated to a standard turbine height using the wind profile power law and used to simulate the hypothetical power output of a turbine. Utilising a number of met sites in this manner makes it possible to develop a model of future wind generation output. However, the accuracy of this extrapolation is strongly dependent on the choice of the wind shear exponent alpha. This paper investigates the accuracy of the simulated generation output compared to reality, using a wind farm in North Rhins, Scotland, and a nearby met station at West Freugh. The results show that while a single annual average value for alpha may be selected to accurately represent the long-term energy generation from a simulated wind farm, there are significant differences between simulation and reality on an hourly power generation basis. This has implications for understanding the impact of the variability of renewables on short timescales, particularly for system balancing and for the way that conventional generation may be asked to respond to a high level of variable renewable generation on the grid in the future.
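
The wind-profile power-law extrapolation mentioned above is straightforward to sketch; in the example below the shear exponent alpha, the hub height and the idealised power curve are illustrative assumptions rather than values from the study.

```python
# Minimal sketch of the wind-profile power-law extrapolation described
# above: scale 10 m met-station winds to hub height and map them through
# an idealised (hypothetical) turbine power curve.
import numpy as np

def extrapolate_wind(v10, z_hub=80.0, z_ref=10.0, alpha=1.0 / 7.0):
    """Power-law extrapolation: v(z_hub) = v10 * (z_hub / z_ref) ** alpha."""
    return v10 * (z_hub / z_ref) ** alpha

def turbine_power(v, rated_kw=2000.0, v_cut_in=3.5, v_rated=12.0, v_cut_out=25.0):
    """Idealised power curve: cubic ramp between cut-in and rated speed."""
    v = np.asarray(v, dtype=float)
    p = rated_kw * ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3
    p = np.clip(p, 0.0, rated_kw)
    p[(v < v_cut_in) | (v > v_cut_out)] = 0.0
    return p

v10 = np.array([4.2, 7.8, 11.5])           # hourly 10 m wind speeds (m/s)
v_hub = extrapolate_wind(v10, alpha=0.14)  # a single annual-average alpha, as an example
print(turbine_power(v_hub))                # hypothetical hourly output (kW)
```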

Relevance: 30.00%

Abstract:

The goal of the Chemistry-Climate Model Validation (CCMVal) activity is to improve understanding of chemistry-climate models (CCMs) through process-oriented evaluation and to provide reliable projections of stratospheric ozone and its impact on climate. An appreciation of the details of model formulations is essential for understanding how models respond to the changing external forcings of greenhouse gases and ozone-depleting substances, and hence for understanding the ozone and climate forecasts produced by the models participating in this activity. Here we introduce and review the models used for the second round (CCMVal-2) of this intercomparison, focusing on the implementation of chemical, transport, radiative, and dynamical processes in these models. In particular, we review the advantages and problems associated with approaches used to model processes of relevance to stratospheric dynamics and chemistry. Furthermore, we state the definitions of the reference simulations performed and describe the forcing data used in these simulations. We identify some developments in chemistry-climate modeling that make models more physically based or more comprehensive, including the introduction of an interactive ocean, online photolysis, troposphere-stratosphere chemistry, and non-orographic gravity-wave deposition linked to tropospheric convection. These relatively new developments indicate that stratospheric CCM modeling is becoming more consistent with our physically based understanding of the atmosphere.

Relevance: 30.00%

Abstract:

The redistribution of a finite amount of martian surface dust during global dust storms and in the intervening periods has been modelled in a dust lifting version of the UK Mars General Circulation Model. When using a constant, uniform threshold in the model’s wind stress lifting parameterisation and assuming an unlimited supply of surface dust, multiannual simulations displayed some variability in dust lifting activity from year to year, arising from internal variability manifested in surface wind stress, but dust storms were limited in size and formed within a relatively short seasonal window. Lifting thresholds were then allowed to vary at each model gridpoint, dependent on the rates of emission or deposition of dust. This enhanced interannual variability in dust storm magnitude and timing, such that model storms covered most of the observed ranges in size and initiation date within a single multiannual simulation. Peak storm magnitude in a given year was primarily determined by the availability of surface dust at a number of key sites in the southern hemisphere. The observed global dust storm (GDS) frequency of roughly one in every 3 years was approximately reproduced, but the model failed to generate these GDSs spontaneously in the southern hemisphere, where they have typically been observed to initiate. After several years of simulation, the surface threshold field—a proxy for net change in surface dust density—showed good qualitative agreement with the observed pattern of martian surface dust cover. The model produced a net northward cross-equatorial dust mass flux, which necessitated the addition of an artificial threshold decrease rate in order to allow the continued generation of dust storms over the course of a multiannual simulation. At standard model resolution, for the southward mass flux due to cross-equatorial flushing storms to offset the northward flux due to GDSs on a timescale of ∼3 years would require an increase in the former by a factor of 3–4. Results at higher model resolution and uncertainties in dust vertical profiles mean that quasi-periodic redistribution of dust on such a timescale nevertheless appears to be a plausible explanation for the observed GDS frequency.
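
A rough sketch of a spatially varying lifting-threshold update of the kind described (threshold rising with net emission, falling with deposition, plus a small constant "artificial" decrease) is shown below; all coefficients and the uniform deposition rate are hypothetical and do not come from the study.

```python
# Minimal sketch of a gridpoint-level lifting-threshold update: the
# wind-stress threshold rises as dust is emitted (depleting the surface
# supply), falls as dust is deposited, and also decays at a small constant
# "artificial" rate. All coefficients are hypothetical placeholders.
import numpy as np

def step_threshold(tau, threshold, a_emit=0.05, a_dep=0.05, artificial_decrease=1e-4):
    """Advance the lifting thresholds one timestep given surface wind stress tau."""
    lifting = np.maximum(tau - threshold, 0.0)   # emission where stress exceeds threshold
    deposition = 0.01 * np.ones_like(tau)        # placeholder uniform deposition rate
    threshold = threshold + a_emit * lifting - a_dep * deposition - artificial_decrease
    return np.maximum(threshold, 0.0), lifting

tau = np.array([0.02, 0.045, 0.03])              # surface wind stress (N m^-2), illustrative
threshold = np.full_like(tau, 0.035)
for _ in range(100):
    threshold, lifted = step_threshold(tau, threshold)
print(threshold, lifted)
```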

Relevance: 30.00%

Abstract:

The impact of climate change on wind power generation potentials over Europe is investigated by considering ensemble projections from two regional climate models (RCMs) driven by a global climate model (GCM). Wind energy density and its interannual variability are estimated based on hourly near-surface wind speeds. Additionally, the possible impact of climatic changes on the energy output of a sample 2.5-MW turbine is discussed. GCM-driven RCM simulations capture the behavior and variability of current wind energy indices, even though some differences exist when compared with reanalysis-driven RCM simulations. Toward the end of the twenty-first century, projections show significant changes of energy density on annual average across Europe that are substantially stronger in seasonal terms. The emergence time of these changes varies from region to region and season to season, but some long-term trends are already statistically significant in the middle of the twenty-first century. Over northern and central Europe, the wind energy potential is projected to increase, particularly in winter and autumn. In contrast, energy potential over southern Europe may experience a decrease in all seasons except for the Aegean Sea. Changes for wind energy output follow the same patterns but are of smaller magnitude. The GCM/RCM model chains project a significant intensification of both interannual and intra-annual variability of energy density over parts of western and central Europe, thus imposing new challenges to a reliable pan-European energy supply in future decades.
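
The basic wind-energy-density diagnostic referred to above can be sketched as follows; the constant air density and the synthetic hourly wind record are placeholders.

```python
# Minimal sketch of the wind-energy-density diagnostic: E = 0.5 * rho * v**3
# from hourly near-surface wind speeds, averaged per year to estimate
# interannual variability. The air density and the random wind record are
# placeholders, not the RCM output used in the study.
import numpy as np

rho = 1.225                                   # air density (kg m^-3), assumed constant
rng = np.random.default_rng(0)
hours_per_year = 8760
speeds = rng.weibull(2.0, size=(30, hours_per_year)) * 8.0  # 30 synthetic "years" (m/s)

energy_density = 0.5 * rho * speeds ** 3      # W m^-2, hourly
annual_mean = energy_density.mean(axis=1)     # one value per year
print(annual_mean.mean(), annual_mean.std())  # climatological mean and interannual spread
```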

Relevance: 30.00%

Abstract:

A rigorous bound is derived which limits the finite-amplitude growth of arbitrary nonzonal disturbances to an unstable baroclinic zonal flow within the context of the two-layer model. The bound is valid for conservative (unforced) flow, as well as for forced-dissipative flow provided that the dissipation is proportional to the potential vorticity. The method used to derive the bound relies on the existence of a nonlinear Liapunov (normed) stability theorem for subcritical flows, which is a finite-amplitude generalization of the Charney-Stern theorem. For the special case of the Phillips model of baroclinic instability, and in the limit of infinitesimal initial nonzonal disturbance amplitude, an improved form of the bound is possible which states that the potential enstrophy of the nonzonal flow cannot exceed ϵβ², where ϵ = (U − U_crit)/U_crit is the (relative) supercriticality. This upper bound turns out to be extremely similar to the maximum predicted by the weakly nonlinear theory. For unforced flow with ϵ < 1, the bound demonstrates that the nonzonal flow cannot contain all of the potential enstrophy in the system; hence in this range of initial supercriticality the total flow must remain, in a certain sense, “close” to a zonal state.
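
For readability, the quoted bound can be restated in LaTeX notation as below; this simply transcribes the abstract's symbols (the label for the nonzonal potential enstrophy is introduced here for clarity), and any dimensional prefactors from the paper's nondimensionalisation are not reproduced.

```latex
% Relative supercriticality and the quoted saturation bound on the
% potential enstrophy of the nonzonal flow, in the limit of
% infinitesimal initial nonzonal disturbance amplitude.
\[
  \epsilon = \frac{U - U_{\mathrm{crit}}}{U_{\mathrm{crit}}},
  \qquad
  Z_{\mathrm{nonzonal}} \;\le\; \epsilon\,\beta^{2} .
\]
```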

Relevance: 30.00%

Abstract:

Descent and spreading of high salinity water generated by salt rejection during sea ice formation in an Antarctic coastal polynya is studied using a hydrostatic, primitive equation three-dimensional ocean model called the Proudman Oceanographic Laboratory Coastal Ocean Modeling System (POLCOMS). The shape of the polynya is assumed to be a rectangle 100 km long and 30 km wide, and the salinity flux into the polynya at its surface is constant. The model has been run at high horizontal spatial resolution (500 m), and numerical simulations reveal a buoyancy-driven coastal current. The coastal current is a robust feature and appears in a range of simulations designed to investigate the influence of a sloping bottom, variable bottom drag, variable vertical turbulent diffusivities, higher salinity flux, and an offshore position of the polynya. It is shown that bottom drag is the main factor determining the current width. This coastal current has not been produced with other numerical models of polynyas, which may be because these models were run at coarser resolutions. The coastal current becomes unstable upstream of its front when the polynya is adjacent to the coast. When the polynya is situated offshore, an unstable current is produced from its outset owing to the capture of cyclonic eddies. The effect of a coastal protrusion and a canyon on the current motion is investigated. In particular, due to the convex shape of the coastal protrusion, the current sheds a dipolar eddy.

Relevance: 30.00%

Abstract:

We conducted two longitudinal mediational studies to test an integrative model of goals, stress and coping, and well-being. Study 1 documented avoidance personal goals as an antecedent of life stressors, and life stressors as a partial mediator of the relation between avoidance goals and longitudinal change in subjective well-being (SWB). Study 2 fully replicated Study 1 and likewise validated avoidance goals as an antecedent of avoidance coping and avoidance coping as a partial mediator of the relation between avoidance goals and longitudinal change in SWB. It also showed that avoidance coping partially mediates the link between avoidance goals and life stressors and validated a sequential mediational model involving both avoidance coping and life stressors. The aforementioned results held when controlling for social desirability, basic traits, and general motivational dispositions. The findings are discussed with regard to the integration of various strands of research on self-regulation.
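
A minimal sketch of a regression-based test of partial mediation of the form described (avoidance goals → life stressors → change in SWB), with a bootstrapped indirect effect, is given below; the data and variable names are hypothetical, and the studies' longitudinal models are considerably more elaborate.

```python
# Minimal sketch of a regression-based partial-mediation test with a
# bootstrapped indirect effect. Data and variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
goals = rng.normal(size=n)                         # avoidance personal goals
stressors = 0.4 * goals + rng.normal(size=n)       # life stressors (mediator)
swb_change = -0.3 * stressors - 0.1 * goals + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # m -> y given x
    return a * b

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                    # bootstrap resample
    boot.append(indirect_effect(goals[idx], stressors[idx], swb_change[idx]))

print(np.percentile(boot, [2.5, 97.5]))            # bootstrap CI for the indirect effect
```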

Relevance: 30.00%

Abstract:

We develop a continuum model describing sea ice as a layer of granulated thick ice, consisting of many rigid, brittle floes, intersected by long, narrow regions of thinner ice known as leads. We consider the evolution of mesoscale leads, formed under extension, whose lengths span many floes, so that the surrounding ice is treated as a granular plastic. The leads are sufficiently small with respect to basin scales of sea ice deformation that they may be modelled using a continuum approach. The model includes evolution equations for the orientational distribution of leads and for their thickness and width, expressed through second-rank tensors, with terms requiring closures. The closing assumptions are constructed for the case of negligibly small lead ice thickness and the canonical deformation types of pure and simple shear, pure divergence and pure convergence. We present a new continuum-scale sea ice rheology that depends upon the isotropic, material rheology of sea ice, the orientational distribution of lead properties and the thick ice thickness. A new model of lead and thick ice interaction is presented that successfully describes a number of effects: (i) because of its brittle nature, thick ice does not thin under extension, and (ii) treating the thick sea ice as a granular material yields finite lead opening under pure shear when granular dilation is unimportant.

Relevance: 30.00%

Abstract:

We develop the essential ingredients of a new continuum, anisotropic model of sea-ice dynamics designed for eventual use in climate simulation. These ingredients are a constitutive law for sea-ice stress, relating stress to the material properties of sea ice and to internal variables describing the sea-ice state, and equations describing the evolution of these variables. The sea-ice cover is treated as a densely flawed two-dimensional continuum consisting of a uniform field of thick ice that is uniformly permeated with narrow linear regions of thinner ice called leads. Lead orientation, thickness and width distributions are described by second-rank tensor internal variables: the structure, thickness and width tensors, whose dynamics are governed by corresponding evolution equations accounting for processes such as new lead generation and rotation as the ice cover deforms. These evolution equations contain contractions of higher-order tensor expressions that require closures. We develop a sea-ice stress constitutive law that relates sea-ice stress to the structure tensor, thickness tensor and strain rate. For the special case of empty leads (containing no ice), linear closures are adopted and we present calculations for simple shear, convergence and divergence.
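
As a generic illustration (not the paper's exact definitions or closures), a second-rank orientational tensor of the kind referred to here, such as the structure tensor, can be built from a normalised distribution ψ(θ) of lead orientations:

```latex
% Generic second-rank orientational tensor built from a normalised
% distribution psi(theta) of lead orientations, with unit normal
% n(theta) = (cos(theta), sin(theta)); illustrative notation only.
\[
  \mathbf{A} \;=\; \int_{0}^{\pi} \psi(\theta)\,\mathbf{n}(\theta)\otimes\mathbf{n}(\theta)\,\mathrm{d}\theta ,
  \qquad
  \operatorname{tr}\mathbf{A} = \int_{0}^{\pi} \psi(\theta)\,\mathrm{d}\theta = 1 .
\]
```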

Relevance: 30.00%

Abstract:

This paper employs an extensive Monte Carlo study to test the size and power of the BDS and close return methods of testing for departures from independent and identical distribution (iid). It is found that the finite sample properties of the BDS test are far superior, and that the close return method cannot be recommended as a model diagnostic. Neither test can be reliably used for very small samples, while the close return test has low power even at large sample sizes.
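
A minimal sketch of a Monte Carlo size experiment for the BDS test is given below, assuming the bds implementation in statsmodels.tsa.stattools is available; the close-return method is not sketched, and the sample size, replication count and embedding dimension are illustrative.

```python
# Minimal sketch of a Monte Carlo size experiment for the BDS test,
# assuming statsmodels' implementation (statsmodels.tsa.stattools.bds).
import numpy as np
from statsmodels.tsa.stattools import bds

rng = np.random.default_rng(42)
n_reps, n_obs, alpha = 500, 250, 0.05

rejections = 0
for _ in range(n_reps):
    x = rng.standard_normal(n_obs)                # data generated under iid (the null)
    stat, pvalue = bds(x, max_dim=2)              # BDS test at embedding dimension 2
    if np.atleast_1d(pvalue)[0] < alpha:
        rejections += 1

print("empirical size:", rejections / n_reps)     # should be close to alpha under iid
```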

Relevance: 30.00%

Abstract:

This paper considers the effect of using a GARCH filter on the properties of the BDS test statistic, as well as a number of other issues relating to the application of the test. It is found that, for certain values of the user-adjustable parameters, the finite sample distribution of the test is far removed from asymptotic normality. In particular, when data generated from some completely different model class are filtered through a GARCH model, the frequency of rejection of iid falls, often substantially. The implication of this result is that it might be inappropriate to use non-rejection of iid of the standardised residuals of a GARCH model as evidence that the GARCH model ‘fits’ the data.
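
The filtering exercise can be sketched as below, assuming the arch package for GARCH(1,1) estimation and the statsmodels bds test; the threshold-autoregressive data generator stands in for "some completely different model class" and is purely illustrative.

```python
# Minimal sketch of GARCH filtering followed by a BDS test on the
# standardised residuals, assuming the `arch` and `statsmodels` packages.
import numpy as np
from arch import arch_model
from statsmodels.tsa.stattools import bds

rng = np.random.default_rng(7)
n = 1000
e = rng.standard_normal(n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):                      # placeholder nonlinear (threshold AR) generator
    y[t] = (0.6 if y[t - 1] > 0 else -0.4) * y[t - 1] + e[t]

res = arch_model(y, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
z = np.asarray(res.std_resid)              # standardised residuals of the GARCH filter
z = z[~np.isnan(z)]

stat, pvalue = bds(z, max_dim=2)
print("BDS p-value on filtered series:", np.atleast_1d(pvalue)[0])
```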

Relevance: 30.00%

Abstract:

Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows a complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, the predictions made by physically based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Better descriptions of physical processes at the watershed scale therefore need to be incorporated into hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small, distributed relief variations originating through concurrent hydrological processes within a storm event was significant for the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage and the roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressures (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations. Finally, the discussion is broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt flood forecasting of a stratiform event with highly different behavior.
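
A minimal sketch of a GLUE analysis of the kind referred to above (Monte Carlo parameter sampling, a behavioural threshold on a Nash-Sutcliffe score against an observed hydrograph, likelihood-weighted predictions) is given below; run_model is a hypothetical stand-in, not the distributed watershed model used in the study.

```python
# Minimal sketch of a GLUE analysis: sample parameter sets, score each
# simulation against an observed hydrograph with Nash-Sutcliffe efficiency,
# keep "behavioural" sets above a threshold and weight their predictions.
# `run_model` and the toy hydrograph are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(100)
observed = 5.0 * np.exp(-((t - 40) / 12.0) ** 2) + 0.2   # toy observed hydrograph

def run_model(peak, width):
    """Hypothetical stand-in for the distributed rainfall-runoff model."""
    return peak * np.exp(-((t - 40) / width) ** 2) + 0.2

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

params = np.column_stack([rng.uniform(2, 8, 2000), rng.uniform(5, 25, 2000)])
scores = np.array([nse(run_model(p, w), observed) for p, w in params])

behavioural = scores > 0.7                                 # GLUE behavioural threshold
weights = scores[behavioural] / scores[behavioural].sum()  # likelihood weights
sims = np.array([run_model(p, w) for p, w in params[behavioural]])
weighted_mean = weights @ sims                             # likelihood-weighted hydrograph
print(float(weighted_mean.max()))
```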

Relevance: 30.00%

Abstract:

The transfer of hillslope water to and through the riparian zone forms an important research area in hydrological investigations. Numerical modelling schemes offer a way to visualise and quantify first-order controls on catchment runoff response and mixing. We use a two-dimensional finite element model to assess the link between model setup decisions (e.g. zero-flux boundary definitions, soil algorithm choice) and the consequent hydrological process behaviour. A detailed understanding of the consequences of model configuration is required in order to produce reliable estimates of state variables. We demonstrate that model configuration decisions can effectively determine the presence or absence of particular hillslope flow processes and the magnitude and direction of flux at the hillslope–riparian interface. If these consequences are not fully explored for any given scheme and application, the resulting process inference may well be misleading.
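
A toy one-dimensional example (not the paper's two-dimensional finite element scheme) of how a single boundary-condition choice changes the simulated flux delivered to the riparian end of a hillslope is sketched below; all parameter values are illustrative.

```python
# Toy 1-D illustration of boundary-condition sensitivity: steady saturated
# flow, K h'' + R = 0, with a fixed head at the riparian end (x = 0) and
# either (a) a zero-flux or (b) a fixed-head condition at the upslope end.
# The Darcy flux reaching the riparian boundary changes substantially
# between the two configurations. All values are illustrative.
import numpy as np

K, R, L, n = 1.0, 1e-3, 100.0, 201          # conductivity, recharge, hillslope length, nodes
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

def riparian_flux(upslope="zero_flux", h_upslope=0.05):
    A = np.zeros((n, n))
    b = np.full(n, -R * dx ** 2 / K)        # interior: h[i-1] - 2 h[i] + h[i+1] = -R dx^2 / K
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A[0, 0], b[0] = 1.0, 0.0                # riparian end: fixed head h = 0
    if upslope == "zero_flux":
        A[-1, -1], A[-1, -2], b[-1] = 1.0, -1.0, 0.0   # dh/dx = 0 at x = L
    else:
        A[-1, -1], b[-1] = 1.0, h_upslope               # fixed head at x = L
    h = np.linalg.solve(A, b)
    return -K * (h[1] - h[0]) / dx          # Darcy flux at the riparian boundary

print("zero-flux upslope boundary:", riparian_flux("zero_flux"))
print("fixed-head upslope boundary:", riparian_flux("fixed_head"))
```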