27 results for Cutting temperature modeling
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Response surface methodology was used to study the effect of temperature, cutting time, and calcium chloride addition level on curd moisture content, whey fat losses, and curd yield. Coagulation and syneresis were continuously monitored using 2 optical sensors detecting light backscatter. The effect of the factors on the sensors’ response was also examined. Retention of fat during cheese making was found to be a function of cutting time and temperature, whereas curd yield was found to be a function of those 2 factors and the level of calcium chloride addition. The main effect of temperature on curd moisture was to increase the rate at which whey was expelled. Temperature and calcium chloride addition level were also found to affect the light backscatter profile during coagulation whereas the light backscatter profile during syneresis was a function of temperature and cutting time. The results of this study suggest that there is an optimum firmness at which the gel should be cut to achieve maximum retention of fat and an optimum curd moisture content to maximize product yield and quality. It was determined that to maximize curd yield and quality, it is necessary to maximize firmness while avoiding rapid coarsening of the gel network and microsyneresis. These results could contribute to the optimization of the cheese-making process.
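As an illustration of the response-surface approach described above, the sketch below fits a generic second-order (quadratic) surface by ordinary least squares. The factors, coefficients, and data are synthetic stand-ins, not values from the study.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit a second-order (quadratic) response surface by least squares.

    X: (n, k) factor settings (e.g. temperature, cutting time, CaCl2 level).
    Returns coefficients for the terms [1, x_i, x_i*x_j (i <= j)].
    """
    n, k = X.shape
    cols = [np.ones(n)]
    for i in range(k):
        cols.append(X[:, i])
    for i in range(k):
        for j in range(i, k):
            cols.append(X[:, i] * X[:, j])
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic illustration: a response (e.g. curd yield) driven by two coded
# factors with an interaction term, as response surface designs assume.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 10 + 1.5 * X[:, 0] - 0.8 * X[:, 1] - 0.5 * X[:, 0] * X[:, 1]
beta = fit_quadratic_rsm(X, y)
# Term order for k=2: [1, x0, x1, x0*x0, x0*x1, x1*x1]
```

On noise-free data the fit recovers the generating coefficients exactly, which is a useful sanity check before applying the same design matrix to real measurements.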
Abstract:
The retrieval (estimation) of sea surface temperatures (SSTs) from space-based infrared observations is increasingly performed using retrieval coefficients derived from radiative transfer simulations of top-of-atmosphere brightness temperatures (BTs). Typically, an estimate of SST is formed from a weighted combination of BTs at a few wavelengths, plus an offset. This paper addresses two questions about the radiative transfer modeling approach to deriving these weighting and offset coefficients. How precisely specified do the coefficients need to be in order to obtain the required SST accuracy (e.g., scatter <0.3 K in week-average SST, bias <0.1 K)? And how precisely is it actually possible to specify them using current forward models? The conclusions are that weighting coefficients can be obtained with adequate precision, while the offset coefficient will often require an empirical adjustment of the order of a few tenths of a kelvin against validation data. Thus, a rational approach to defining retrieval coefficients is one of radiative transfer modeling followed by offset adjustment. The need for this approach is illustrated from experience in defining SST retrieval schemes for operational meteorological satellites. A strategy is described for obtaining the required offset adjustment, and the paper highlights some of the subtler aspects involved with reference to the example of SST retrievals from the imager on the geostationary satellite GOES-8.
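The retrieval form described above (a weighted combination of brightness temperatures plus an offset) and the recommended empirical offset adjustment can be sketched in a few lines. The channel weights, offset, and matchup values below are hypothetical, not operational coefficients.

```python
import numpy as np

def retrieve_sst(bt, weights, offset):
    """Linear SST retrieval: a weighted combination of channel
    brightness temperatures (K) plus an offset coefficient."""
    return float(np.dot(weights, bt) + offset)

def adjust_offset(offset, retrieved, validation):
    """Empirical offset adjustment: subtract the mean bias of the
    retrievals against in situ (e.g. buoy) validation SSTs."""
    bias = float(np.mean(np.asarray(retrieved) - np.asarray(validation)))
    return offset - bias

# Hypothetical split-window coefficients (illustrative only).
w = np.array([3.2, -2.2])        # weights for two infrared channel BTs
c = 1.0                          # offset (K) from radiative transfer
bt = np.array([290.1, 288.4])    # observed brightness temperatures (K)
sst = retrieve_sst(bt, w, c)

# Offset tuned against two hypothetical validation matchups, reflecting
# the paper's point that the offset usually needs empirical adjustment
# of a few tenths of a kelvin while the weights can stand as modeled.
c_adj = adjust_offset(c, [294.84, 295.10], [294.60, 294.90])
```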
Abstract:
We have applied a combination of spectroscopic and diffraction methods to study the adduct formed between squaric acid and bipyridine, which has been postulated to exhibit proton transfer associated with a single-crystal to single-crystal phase transition at ca. 450 K. A combination of X-ray single-crystal and very high-flux powder neutron diffraction data confirmed that a proton does transfer from the acid to the base in the high-temperature form. Powder X-ray diffraction measurements demonstrated that the transition was reversible but that a significant kinetic energy barrier must be overcome to revert to the original structure. Computational modeling is consistent with these results. Modeling also revealed that, while the proton transfer event would be strongly discouraged in the gas phase, it occurs in the solid state due to the increase in charge state of the molecular ions and their arrangement inside the lattice. The color change is attributed to a narrowing of the squaric acid to bipyridine charge-transfer energy gap. Finally, evidence for the possible existence of two further phases at high pressure is also presented.
Abstract:
Investigation of preferred structures of planetary wave dynamics is addressed using multivariate Gaussian mixture models. The number of components in the mixture is obtained using order statistics of the mixing proportions, hence avoiding previous difficulties related to sample sizes and independence issues. The method is first applied to a few low-order stochastic dynamical systems and data from a general circulation model. The method is next applied to winter daily 500-hPa heights from 1949 to 2003 over the Northern Hemisphere. A spatial clustering algorithm is first applied to the leading two principal components (PCs) and shows significant clustering. The clustering is particularly robust for the first half of the record and less so for the second half. The mixture model is then used to identify the clusters. Two highly significant extratropical planetary-scale preferred structures are obtained within the state space spanned by the first two to four EOFs. The first pattern shows a Pacific-North American (PNA) pattern and a negative North Atlantic Oscillation (NAO), and the second pattern is nearly opposite to the first one. It is also observed that some subspaces show multivariate Gaussianity, compatible with linearity, whereas others show multivariate non-Gaussianity. The same analysis is also applied to two subperiods, before and after 1978, and shows a similar regime behavior, with slightly stronger support for the first subperiod. In addition, a significant regime shift is observed between the two periods, as well as a change in the shape of the distribution. The patterns associated with the regime shifts reflect essentially a PNA pattern and an NAO pattern, consistent with the observed global warming effect on climate and the observed shift in sea surface temperature around the mid-1970s.
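The mixture-model step above can be sketched with a minimal two-component EM fit in one dimension. This is an illustrative toy, with assumed initialisation, a fixed component count, and synthetic data; it is not the paper's multivariate procedure or its order-statistics component selection.

```python
import math
import random

def em_gmm_1d(x, iters=200):
    """Minimal EM for a two-component 1-D Gaussian mixture.
    Returns (mixing proportions, means, variances)."""
    mu = [min(x), max(x)]        # crude initialisation from the data spread
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([pk / s for pk in p])
        # M-step: update proportions, means, variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2
                         for r, xi in zip(resp, x)) / nk
            var[k] = max(var[k], 1e-6)  # guard against collapse
    return pi, mu, var

random.seed(1)
x = [random.gauss(-2, 0.5) for _ in range(300)] + \
    [random.gauss(3, 0.7) for _ in range(300)]
pi, mu, var = em_gmm_1d(x)
```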
Abstract:
A modeling study was carried out into pea-barley intercropping in northern Europe. The two objectives were (a) to compare pea-barley intercropping to sole cropping in terms of grain and nitrogen yield amounts and stability, and (b) to explore options for managing pea-barley intercropping systems in order to maximize the biomass produced and the grain and nitrogen yields according to the available resources, such as light, water and nitrogen. The study consisted of simulations taking into account soil and weather variability among three sites located in northern European countries (Denmark, United Kingdom and France), and using 10 years of weather records. A preliminary stage evaluated the STICS intercrop model's ability to predict grain and nitrogen yields of the two species, using a 2-year dataset from trials conducted at the three sites. The work was carried out in two phases: (a) the model was run to investigate the potential of intercrops as compared to sole crops, and (b) the model was run to explore options for managing pea-barley intercropping, asking the following three questions: (i) In order to increase light capture, would it be worth delaying the sowing date of one species? (ii) How should the sowing density and seed proportion of each species in the intercrop be managed to improve total grain yield and N use efficiency? (iii) How can the use of nitrogen resources be optimized by choosing the most suitable preceding crop and/or the most appropriate soil?
It was found that (1) intercropping made better use of environmental resources as regards yield amount and stability than sole cropping, with a noticeable site effect, (2) pea growth in intercrops was strongly linked to soil moisture, and barley yield was determined by nitrogen uptake and light interception due to its height relative to pea, (3) sowing barley before pea led to a relative grain yield reduction averaged over all three sites, but sowing strategy must be adapted to the location, being dependent on temperature and thus latitude, (4) density and species proportions had a small effect on total grain yield, underlining the interspecific offset in the use of environmental growth resources which led to similar total grain yields whatever the pea-barley design, and (5) long-term strategies including mineralization management through organic residue supply and rotation management were very valuable, always favoring intercrop total grain yield and N accumulation.
Abstract:
Two models for predicting Septoria tritici on winter wheat (cv. Riband) were developed using a program based on an iterative search of correlations between disease severity and weather. Data from four consecutive cropping seasons (1993/94 to 1996/97) at nine sites throughout England were used. A qualitative model predicted the presence or absence of Septoria tritici (at a 5% severity threshold within the top three leaf layers) using winter temperature (January/February) and wind speed up to about the first node detectable growth stage. For sites above the disease threshold, a quantitative model predicted severity of Septoria tritici using rainfall during stem elongation. A test statistic was derived to test the validity of the iterative search used to obtain both models. This statistic was used in combination with bootstrap analyses, in which the search program was rerun using weather data from previous years (therefore uncorrelated with the disease data), to investigate how likely correlations such as the ones found in our models would have been in the absence of genuine relationships.
Abstract:
Time-resolved studies of germylene, GeH2, generated by the 193 nm laser flash photolysis of 3,4-dimethyl-1-germacyclopent-3-ene, have been carried out to obtain rate constants for its bimolecular reactions with ethyl- and diethylgermanes in the gas phase. The reactions were studied over the pressure range 1-100 Torr with SF6 as bath gas and at five temperatures in the range 297-564 K. Only slight pressure dependences were found for GeH2 + EtGeH3 (399, 486, and 564 K). The high pressure rate constants gave the following Arrhenius parameters: for GeH2 + EtGeH3, log A = -10.75 +/- 0.08 and E-a = -6.7 +/- 0.6 kJ mol(-1); for GeH2 + Et2GeH2, log A = -10.68 +/- 0.11 and E-a = -6.95 +/- 0.80 kJ mol(-1). These are consistent with fast, near collision-controlled, association processes at 298 K. RRKM modeling calculations are, for the most part, consistent with the observed pressure dependence of GeH2 + EtGeH3. The ethyl substituent effects have been extracted from these results and are much larger than the analogous methyl substituent effects in the SiH2 + methylsilane reaction series. This is consistent with a mechanistic model for Ge-H insertion in which the intermediate complex has a sizable secondary barrier to rearrangement.
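The reported Arrhenius parameters can be converted back into rate constants via k = A exp(-Ea/RT). The sketch below does this for GeH2 + EtGeH3, assuming the pre-exponential factor A is in the usual gas-kinetics units of cm³ molecule⁻¹ s⁻¹ (the abstract does not state the units explicitly).

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_k(log_a, ea_kj_mol, temp_k):
    """Rate constant from Arrhenius parameters: k = A * exp(-Ea / RT).
    log_a is log10 of the pre-exponential factor A."""
    return 10 ** log_a * math.exp(-ea_kj_mol * 1000.0 / (R * temp_k))

# Reported parameters for GeH2 + EtGeH3: log A = -10.75, Ea = -6.7 kJ/mol.
k_298 = arrhenius_k(-10.75, -6.7, 298.0)
k_564 = arrhenius_k(-10.75, -6.7, 564.0)
# The negative activation energy means k decreases as temperature rises,
# as expected for a fast, near collision-controlled association process.
```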
Abstract:
Polycondensation of 2,6-dihydroxynaphthalene with 4,4'-bis(4"-fluorobenzoyl)biphenyl affords a novel, semicrystalline poly(ether ketone) with a melting point of 406 °C and a glass transition temperature (onset) of 168 °C. Molecular modeling and diffraction-simulation studies of this polymer, coupled with data from the single-crystal structure of an oligomer model, have enabled the crystal and molecular structure of the polymer to be determined from X-ray powder data. This structure, the first for any naphthalene-containing poly(ether ketone), is fully ordered, in monoclinic space group P2(1)/b, with two chains per unit cell. Rietveld refinement against the experimental powder data gave a final agreement factor (Rwp) of 6.7%.
Abstract:
The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m2 range rather than the few J/m2 of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of specific surface work are not surprising in terms of ductile fracture mechanics, where kJ/m2 values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional ‘plasticity and friction only’ analyses seem to have no quantitative explanation are now given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level, as the uncut chip thickness is increased, is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics.
The toughness/strength ratio of a given material will change with rate, temperature, and thermomechanical treatment and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
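The claim that the positive force intercept measures the specific surface work can be illustrated numerically: fit a straight line to cutting force vs. depth of cut, and divide the intercept by the cut width. The data below are noise-free hypothetical values constructed to satisfy F = k*t + R*w, not measurements from the paper.

```python
import numpy as np

def specific_surface_work(depths, forces, width):
    """Fit cutting force (N) vs. uncut chip thickness (m) with a straight
    line; the positive intercept, divided by the cut width (m), estimates
    the specific surface work (fracture toughness R, J/m^2)."""
    slope, intercept = np.polyfit(depths, forces, 1)
    return intercept / width

# Hypothetical data: width 2 mm, assumed toughness R = 20 kJ/m^2.
depths = np.array([50e-6, 100e-6, 150e-6, 200e-6])  # depths of cut (m)
R_true, width, k = 20e3, 2e-3, 1.2e6
forces = k * depths + R_true * width                # F = k*t + R*w
R_est = specific_surface_work(depths, forces, width)
```

Ignoring that intercept and attributing the whole force to shear is exactly how the anomalously high apparent yield stresses at small depths of cut arise.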
Abstract:
Optical density measurements were used to estimate the effect of heat treatments on the single-cell lag times of Listeria innocua fitted to a shifted gamma distribution. The single-cell lag time was subdivided into repair time (the shift of the distribution, assumed to be uniform for all cells) and adjustment time (varying randomly from cell to cell). After heat treatments in which all of the cells recovered (sublethal), the repair time and the mean and the variance of the single-cell adjustment time increased with the severity of the treatment. When the heat treatments resulted in a loss of viability (lethal), the repair time of the survivors increased with the decimal reduction of the cell numbers independently of the temperature, while the mean and variance of the single-cell adjustment times remained the same irrespective of the heat treatment. Based on these observations and modeling of the effect of time and temperature of the heat treatment, we propose that the severity of a heat treatment can be characterized by the repair time of the cells whether the heat treatment is lethal or not, an extension of the F value concept for sublethal heat treatments. In addition, the repair time could be interpreted as the extent or degree of injury with a multiple-hit lethality model. Another implication of these results is that the distribution of the time for cells to reach unacceptable numbers in food is not affected by the time-temperature combination resulting in a given decimal reduction.
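The decomposition of single-cell lag into a fixed repair time (the shift) plus a gamma-distributed adjustment time can be sketched as follows. The parameter values are hypothetical, chosen only to show the structure of the shifted gamma model.

```python
import random

def single_cell_lag(repair_time, shape, scale, rng):
    """Single-cell lag = fixed repair time (the shift, common to all
    cells) + gamma-distributed adjustment time (cell-to-cell variation)."""
    return repair_time + rng.gammavariate(shape, scale)

rng = random.Random(42)
# Hypothetical parameters: 1.5 h repair, gamma(shape=2, scale=0.8) adjustment.
lags = [single_cell_lag(1.5, 2.0, 0.8, rng) for _ in range(10000)]
mean_lag = sum(lags) / len(lags)   # expected ~ 1.5 + 2.0 * 0.8 = 3.1 h
```

In the abstract's terms, a more severe sublethal treatment would raise the shift and the gamma parameters, while a lethal treatment would raise only the shift for the survivors.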
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases in which the shape of the curve followed the typical convex upward form. In the remainder of published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time delay parameter, and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and to z (the temperature increase needed to change the D value by a factor of 10) in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates changed as a function of the square root of time would be consistent with a diffusion-limited process.
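The model's central assumption, a first-order rate varying inversely with √t, integrates in closed form: dN/dt = -(k/√t) N gives ln(N/N0) = -2k√t, i.e. survival is log-linear in √t, which is convex upward against t. The sketch below uses a hypothetical rate parameter.

```python
import math

def log10_survivors(k, t):
    """Log10 surviving fraction for first-order inactivation with
    specific rate k / sqrt(t):
        dN/dt = -(k / sqrt(t)) * N  =>  ln(N/N0) = -2 * k * sqrt(t),
    giving the convex-upward survival curves the model describes."""
    return -2.0 * k * math.sqrt(t) / math.log(10)

# Hypothetical rate parameter (units min^-0.5) at some fixed pressure.
k = 1.5
reductions = [log10_survivors(k, t) for t in (1, 4, 16)]
# Quadrupling the time doubles sqrt(t) and hence the log reduction.
```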
Abstract:
Quantitative control of aroma generation during the Maillard reaction presents great scientific and industrial interest. Although there have been many studies conducted in simplified model systems, the results are difficult to apply to complex food systems, where the presence of other components can have a significant impact. In this work, an aqueous extract of defatted beef liver was chosen as a simplified food matrix for studying the kinetics of the Maillard reaction. Aliquots of the extract were heated under different time and temperature conditions and analyzed for sugars, amino acids, and methylbutanals, which are important Maillard-derived aroma compounds formed in cooked meat. Multiresponse kinetic modeling, based on a simplified mechanistic pathway, gave a good fit with the experimental data, but only when additional steps were introduced to take into account the interactions of glucose and glucose-derived intermediates with protein and other amino compounds. This emphasizes the significant role of the food matrix in controlling the Maillard reaction.
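The multiresponse idea, fitting several measured species to one mechanistic scheme that includes an extra matrix-interaction step, can be caricatured with a toy reaction network integrated by Euler steps. The species, rate constants, and scheme below are simplified assumptions for illustration, not the paper's fitted pathway.

```python
def simulate_maillard(hours, dt=0.001, k1=0.05, k2=0.3, k3=0.1):
    """Toy multiresponse Maillard scheme (hypothetical rate constants):
      glucose + amino acid   --k1--> intermediate
      intermediate           --k2--> methylbutanal (aroma)
      intermediate + protein --k3--> bound (matrix-interaction step)
    Integrated with simple explicit Euler steps."""
    glc, aa, inter, aroma, protein = 1.0, 1.0, 0.0, 0.0, 1.0
    for _ in range(int(hours / dt)):
        r1 = k1 * glc * aa
        r2 = k2 * inter
        r3 = k3 * inter * protein
        glc -= r1 * dt
        aa -= r1 * dt
        inter += (r1 - r2 - r3) * dt
        aroma += r2 * dt
        # protein assumed in excess; its concentration is held constant
    return glc, aa, inter, aroma

glc, aa, inter, aroma = simulate_maillard(10.0)
```

Without the k3 branch, every consumed glucose molecule would eventually appear as aroma; the matrix-interaction step diverts part of the flux, which is the qualitative effect the extra fitted steps capture.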
Abstract:
Strong vertical gradients at the top of the atmospheric boundary layer affect the propagation of electromagnetic waves and can produce radar ducts. A three-dimensional, time-dependent, nonhydrostatic numerical model was used to simulate the propagation environment in the atmosphere over the Persian Gulf when aircraft observations of ducting had been made. A division of the observations into high- and low-wind cases was used as a framework for the simulations. Three sets of simulations were conducted with initial conditions of varying degrees of idealization and were compared with the observations taken in the Ship Antisubmarine Warfare Readiness/Effectiveness Measuring (SHAREM-115) program. The best results occurred with the initialization based on a sounding taken over the coast modified by the inclusion of data on low-level atmospheric conditions over the Gulf waters. The development of moist, cool, stable marine internal boundary layers (MIBL) in air flowing from land over the waters of the Gulf was simulated. The MIBLs were capped by temperature inversions and associated lapses of humidity and refractivity. The low-wind MIBL was shallower and the gradients at its top were sharper than in the high-wind case, in agreement with the observations. Because it is also forced by land–sea contrasts, a sea-breeze circulation frequently occurs in association with the MIBL. The size, location, and internal structure of the sea-breeze circulation were realistically simulated. The gradients of temperature and humidity that bound the MIBL cause perturbations in the refractivity distribution that, in turn, lead to trapping layers and ducts. The existence, location, and surface character of the ducts were well captured. Horizontal variations in duct characteristics due to the sea-breeze circulation were also evident. The simulations successfully distinguished between high- and low-wind occasions, a notable feature of the SHAREM-115 observations. 
The modeled magnitudes of duct depth and strength, although leaving scope for improvement, were most encouraging.
Abstract:
We review the scientific literature since the 1960s to examine the evolution of modeling tools and observations that have advanced understanding of global stratospheric temperature changes. Observations show overall cooling of the stratosphere during the period for which they are available (since the late 1950s and late 1970s from radiosondes and satellites, respectively), interrupted by episodes of warming associated with volcanic eruptions, and superimposed on variations associated with the solar cycle. There has been little global mean temperature change since about 1995. The temporal and vertical structure of these variations is reasonably well explained by models that include changes in greenhouse gases, ozone, volcanic aerosols, and solar output, although there are significant uncertainties in the temperature observations and regarding the nature and influence of past changes in stratospheric water vapor. As a companion to a recent WIREs review of tropospheric temperature trends, this article identifies areas of commonality and contrast between the tropospheric and stratospheric trend literature. For example, the increased attention over time to radiosonde and satellite data quality has contributed to better characterization of uncertainty in observed trends both in the troposphere and in the lower stratosphere, and has highlighted the relative deficiency of attention to observations in the middle and upper stratosphere. In contrast to the relatively unchanging expectations of surface and tropospheric warming primarily induced by greenhouse gas increases, stratospheric temperature change expectations have arisen from experiments with a wider variety of model types, showing more complex trend patterns associated with a greater diversity of forcing agents.