49 results for Coefficient of determination
in CentAUR: Central Archive University of Reading - UK
Abstract:
Peatland habitats are important carbon stocks that also have the potential to be significant sources of greenhouse gases, particularly when subject to changes such as artificial drainage and application of fertilizer. Models aiming to estimate greenhouse gas release from peatlands require an accurate estimate of the diffusion coefficient of gas transport through soil (Ds). The availability of specific measurements for peatland soils is currently limited. This study measured Ds for a peat soil with an overlying clay horizon and compared the values with those from widely available models; the Ds of a sandy loam reference soil was measured for comparison. Using the Currie (1960) method, Ds was measured over an air-filled porosity (ε) range of 0 to 0.5 cm³ cm⁻³. Values of Ds ranged between 3.2 × 10⁻⁴ and 4.4 × 10⁻³ m² hour⁻¹ for the peat cores, between 0 and 4.7 × 10⁻³ m² hour⁻¹ for the loamy clay cores, and between 5.4 × 10⁻⁴ and 3.4 × 10⁻³ m² hour⁻¹ for the sandy loam reference soil. The agreement of measured and modelled values of relative diffusivity (Ds/D0, where D0 is the diffusion coefficient through free air) varied with soil type; however, the Campbell (1985) model provided the best replication of measured values for all soils. This research therefore suggests that use of the Campbell model would be appropriate in the absence of accurately measured Ds and porosity values for a study soil. Future research into methods that reduce shrinkage of peat during measurement, and therefore allow Ds to be measured over a greater range of ε, would be beneficial.
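The model comparison described here amounts to evaluating a predicted relative diffusivity against measured values over the porosity range. The abstract does not give the functional form of the Campbell (1985) model, so the minimal sketch below uses the classical Buckingham ε² power law as a hypothetical stand-in; the porosity values, measured Ds values and D0 are illustrative placeholders, not data from the study.

```python
import numpy as np

# Hypothetical example: compare measured relative diffusivity (Ds/D0) with a
# simple power-law model. Buckingham's eps**2 form is used only as a stand-in
# for whichever published model (e.g. Campbell, 1985) is being evaluated.
D0 = 7.4e-2            # diffusion coefficient in free air, m^2 h^-1 (placeholder value)
eps = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.50])   # air-filled porosity, cm^3 cm^-3
Ds_measured = np.array([3.2e-4, 6.0e-4, 1.1e-3, 1.9e-3, 3.0e-3, 4.4e-3])  # m^2 h^-1, illustrative

rel_measured = Ds_measured / D0       # measured Ds/D0
rel_model = eps ** 2                  # Buckingham-type power-law prediction

# Root mean square deviation between measured and modelled relative diffusivity
rmsd = np.sqrt(np.mean((rel_measured - rel_model) ** 2))
print(f"RMSD of Ds/D0: {rmsd:.4f}")
```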
Abstract:
Scintillometry is an established technique for determining large areal average sensible heat fluxes. The scintillometer measurement is related to sensible heat flux via Monin–Obukhov similarity theory, which was developed for ideal homogeneous land surfaces. In this study it is shown that judicious application of scintillometry over heterogeneous mixed agriculture on undulating topography yields valid results when compared to eddy covariance (EC). A large aperture scintillometer (LAS) over a 2.4 km path was compared with four EC stations measuring sensible (H) and latent (LvE) heat fluxes over different vegetation (cereals and grass), which when aggregated were representative of the LAS source area. The partitioning of available energy into H and LvE varied strongly for different vegetation types, with H varying by a factor of three between senesced winter wheat and grass pasture. The LAS-derived H agrees (one-to-one within the experimental uncertainty) with H aggregated from EC, with a high coefficient of determination of 0.94. Chronological analysis shows individual fields may have a varying contribution to the areal average sensible heat flux on short (weekly) time scales due to phenological development and changing soil moisture conditions. Using spatially aggregated measurements of net radiation and soil heat flux with H from the LAS, the areal averaged latent heat flux (LvE_LAS) was calculated as the residual of the surface energy balance. The regression of LvE_LAS against aggregated LvE from the EC stations has a slope of 0.94, close to ideal, and demonstrates that this is an accurate method for the landscape-scale estimation of evaporation over heterogeneous complex topography.
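The residual calculation of the latent heat flux and its comparison against eddy covariance can be sketched as follows. The flux arrays are hypothetical half-hourly values in W m⁻², not data from the study, and the regression is forced through the origin purely for illustration (the abstract does not state the regression form used).

```python
import numpy as np

# Hypothetical half-hourly fluxes (W m^-2), standing in for the measured series.
Rn = np.array([420.0, 480.0, 510.0, 450.0, 390.0])     # spatially aggregated net radiation
G  = np.array([ 60.0,  70.0,  75.0,  65.0,  55.0])     # spatially aggregated soil heat flux
H_las  = np.array([180.0, 210.0, 230.0, 190.0, 160.0]) # sensible heat flux from the LAS
LvE_ec = np.array([175.0, 195.0, 200.0, 190.0, 170.0]) # aggregated latent heat flux from EC

# Latent heat flux as the residual of the surface energy balance
LvE_las = Rn - G - H_las

# Regression of LvE_las on aggregated EC LvE (through the origin), plus R^2
slope = np.sum(LvE_las * LvE_ec) / np.sum(LvE_ec ** 2)
ss_res = np.sum((LvE_las - slope * LvE_ec) ** 2)
ss_tot = np.sum((LvE_las - LvE_las.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"slope = {slope:.2f}, R^2 = {r2:.2f}")
```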
Abstract:
The IPLV overall coefficient, presented by the Air-Conditioning and Refrigeration Institute (ARI) of America, describes the part-load running status of the air-conditioning system host only; no comparable coefficient has been developed to reflect the operation of the whole air-conditioning system under part load. In this study, the running-time proportions of air-conditioning systems under part load were obtained by analysing energy consumption data recorded during the practical operation of public buildings in Chongqing, using the month-by-month statistical distribution of building energy consumption. Compared with the IPLV weighting numbers, the part-load operation coefficient derived here reflects not only the status of the refrigerating host but also the energy efficiency of the whole air-conditioning system. Because the coefficient is obtained by processing and analysing practical running data, it reflects the actual running status of the area and building type concerned. The method also differs from the model analysis used to obtain the IPLV weighting numbers, in that it yields both the four equal-proportion weightings and the part-load operation coefficient of the air-conditioning system at any required load rate.
Abstract:
The aims of this study were (i) to compare the inhibitory effects of the natural microflora of different foods on the growth of Listeria monocytogenes during enrichment in selective and non-selective broths; (ii) to isolate and identify components of the microflora of the most inhibitory food; and (iii) to determine which of these components was most inhibitory to growth of L. monocytogenes in co-culture studies. Growth of an antibiotic-resistant marker strain of L. monocytogenes was examined during enrichment of a range of different foods in Tryptone Soya Broth (TSB), Half Fraser Broth (HFB) and Oxoid Novel Enrichment (ONE) Broth. Inhibition of L. monocytogenes was greatest in the presence of minced beef, salami and soft cheese and least with prepared fresh salad and chicken pâté. For any particular food the numbers of L. monocytogenes present after 24 h enrichment in different broths increased in the order: TSB, HFB and ONE Broth. Numbers of L. monocytogenes recovered after enrichment in TSB were inversely related to the initial aerobic plate count (APC) in the food, but with only a moderate coefficient of determination (R²) of 0.51, implying that microbial numbers and the composition of the microflora both influenced the degree of inhibition of L. monocytogenes. In HFB and ONE Broth the relationship between APC and final L. monocytogenes counts was weaker. The microflora of TSB after 24 h enrichment of minced beef consisted of lactic acid bacteria, Brochothrix thermosphacta, Pseudomonas spp., Enterobacteriaceae, and enterococci. In co-culture studies of L. monocytogenes with different components of the microflora in TSB, the lactic acid bacteria were the most inhibitory, followed by the Enterobacteriaceae. The least inhibitory organisms were Pseudomonas sp., enterococci and B. thermosphacta. In HFB and ONE Broth the growth of Gram-negative organisms was inhibited but lactic acid bacteria still reached high numbers after 24 h. A more detailed study of the growth of low numbers of L. monocytogenes during enrichment of minced beef in TSB revealed that growth of L. monocytogenes ceased at a cell concentration of about 10² cfu/ml when lactic acid bacteria entered stationary phase. However, in ONE Broth growth of lactic acid bacteria was slower than in TSB, with a longer lag time, allowing L. monocytogenes to achieve much higher numbers before the lactic acid bacteria reached stationary phase. This work has identified the relative inhibitory effects of different components of a natural food microflora and shown that the ability of low numbers of L. monocytogenes to achieve high cell concentrations is highly dependent on the extent to which enrichment media are able to inhibit or delay growth of the more effective competitors.
Effect of milk fat concentration and gel firmness on syneresis during curd stirring in cheese-making
Abstract:
An experiment was undertaken to investigate the effect of milk fat level (0%, 2.5% and 5.0% w/w) and gel firmness level at cutting (5, 35 and 65 Pa) on indices of syneresis while the curd was undergoing stirring. The curd moisture content, yield of whey, fat in whey and casein fines in whey were measured at fixed intervals between 5 and 75 min after cutting the gel. The casein level in milk and the clotting conditions were kept constant in all trials. The trials were carried out using recombined whole milk in an 11 L cheese vat. The fat level in milk had a large negative effect on the yield of whey. A clear effect of gel firmness on casein fines was observed. The best overall prediction, in terms of coefficient of determination, was for curd moisture content using milk fat concentration, time after gel cutting and set-to-cut time (R² = 0.95).
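A minimal sketch of the kind of multiple regression behind the quoted prediction is given below. The design matrix and responses are fabricated placeholders; only the structure (milk fat concentration, time after gel cutting and set-to-cut time as predictors of curd moisture content) follows the abstract.

```python
import numpy as np

# Hypothetical data: columns are milk fat (% w/w), time after cutting (min),
# set-to-cut time (min); the response is curd moisture content (%).
X = np.array([
    [0.0,  5, 20], [0.0, 45, 20], [0.0, 75, 20],
    [2.5,  5, 35], [2.5, 45, 35], [2.5, 75, 35],
    [5.0,  5, 60], [5.0, 45, 60], [5.0, 75, 60],
], dtype=float)
y = np.array([82.0, 74.0, 70.0, 80.0, 73.0, 69.0, 78.0, 72.0, 68.0])

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Coefficient of determination (R^2) of the fitted model
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("R^2 =", round(1.0 - ss_res / ss_tot, 3))
```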
Abstract:
Current feed evaluation systems for ruminants are too imprecise to describe diets in terms of their acidosis risk. The dynamic mechanistic model described herein arises from the integration of a lactic acid (La) metabolism module into an extant model of whole-rumen function. The model was evaluated using published data from cows and sheep fed a range of diets or infused with various doses of La. The model performed well in simulating peak rumen La concentrations (coefficient of determination = 0.96; root mean square prediction error = 16.96% of observed mean), although frequency of sampling for the published data prevented a comprehensive comparison of prediction of time to peak La accumulation. The model showed a tendency for increased La accumulation following feeding of diets rich in nonstructural carbohydrates, although less-soluble starch sources such as corn tended to limit rumen La concentration. Simulated La absorption from the rumen remained low throughout the feeding cycle. The competition between bacteria and protozoa for rumen La suggests a variable contribution of protozoa to total La utilization. However, the model was unable to simulate the effects of defaunation on rumen La metabolism, indicating a need for a more detailed description of protozoal metabolism. The model could form the basis of a feed evaluation system with regard to rumen La metabolism.
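The two evaluation statistics quoted above (coefficient of determination, and root mean square prediction error expressed as a percentage of the observed mean) can be computed as in this short sketch; the observed and simulated peak La concentrations are invented numbers for illustration only.

```python
import numpy as np

# Hypothetical observed vs simulated peak rumen La concentrations (mM)
observed  = np.array([4.0, 12.0, 25.0, 40.0, 8.0])
simulated = np.array([5.0, 10.0, 27.0, 36.0, 9.0])

# Coefficient of determination (squared Pearson correlation, as often reported)
r = np.corrcoef(observed, simulated)[0, 1]
r2 = r ** 2

# Root mean square prediction error as a percentage of the observed mean
rmspe = np.sqrt(np.mean((observed - simulated) ** 2)) / observed.mean() * 100.0
print(f"R^2 = {r2:.2f}, RMSPE = {rmspe:.1f}% of observed mean")
```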
Abstract:
We utilized an ecosystem process model (SIPNET, simplified photosynthesis and evapotranspiration model) to estimate carbon fluxes of gross primary productivity and total ecosystem respiration of a high-elevation coniferous forest. The data assimilation routine incorporated aggregated twice-daily measurements of the net ecosystem exchange of CO2 (NEE) and satellite-based reflectance measurements of the fraction of absorbed photosynthetically active radiation (fAPAR) on an eight-day timescale. From these data we conducted a data assimilation experiment with fifteen different combinations of available data using twice-daily NEE, aggregated annual NEE, eight-day fAPAR, and average annual fAPAR. Model parameters were conditioned on three years of NEE and fAPAR data and the results were evaluated to determine the information content of the different combinations of data streams. Across the data assimilation experiments conducted, model selection metrics such as the Bayesian Information Criterion and the Deviance Information Criterion obtained minimum values when assimilating average annual fAPAR and twice-daily NEE data. Application of wavelet coherence analyses showed higher correlations between measured and modeled fAPAR on longer timescales, ranging from 9 to 12 months. There were strong correlations between measured and modeled NEE (coefficient of determination, R² = 0.86), but correlations between measured and modeled eight-day fAPAR were quite poor (R² = −0.94). We conclude that this inability to reproduce fAPAR on the eight-day timescale would improve with consideration of radiative transfer through the plant canopy. Modeled fluxes when assimilating average annual fAPAR and annual NEE were comparable to the corresponding results when assimilating twice-daily NEE, albeit with greater uncertainty. Our results support the conclusion that, for this coniferous forest, twice-daily NEE data are a critical measurement stream for the data assimilation. The results from this modeling exercise indicate that, for this coniferous forest, annual averages of satellite-based fAPAR measurements paired with annual NEE estimates may provide spatial detail for components of ecosystem carbon fluxes in the proximity of eddy covariance towers. Inclusion of other independent data streams in the assimilation would also reduce uncertainty on modeled values.
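For the model-selection step, the Bayesian Information Criterion can be computed for each assimilation configuration from its residual sum of squares under an assumed Gaussian error model; the sketch below uses that standard approximation with hypothetical run names, sample sizes and residuals (the Deviance Information Criterion requires posterior samples and is not shown).

```python
import math

def bic_gaussian(sse: float, n: int, k: int) -> float:
    """BIC under i.i.d. Gaussian errors: n * ln(SSE / n) + k * ln(n) (additive constants dropped)."""
    return n * math.log(sse / n) + k * math.log(n)

# Hypothetical residual sums of squares for two data combinations,
# each conditioned on the same number of model parameters.
n_obs, n_params = 2190, 25          # placeholder sample size and parameter count
runs = {
    "twice-daily NEE + annual fAPAR": 310.0,
    "twice-daily NEE + 8-day fAPAR":  355.0,
}
for name, sse in runs.items():
    print(f"{name}: BIC = {bic_gaussian(sse, n_obs, n_params):.1f}")
```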
Abstract:
Site-specific meteorological forcing appropriate for applications such as urban outdoor thermal comfort simulations can be obtained using a newly coupled scheme that combines a simple slab convective boundary layer (CBL) model and urban land surface model (ULSM) (here two ULSMs are considered). The former simulates daytime CBL height, air temperature and humidity, and the latter estimates urban surface energy and water balance fluxes accounting for changes in land surface cover. The coupled models are tested at a suburban site and two rural sites, one irrigated and one unirrigated grass, in Sacramento, U.S.A. All the variables modelled compare well to measurements (e.g. coefficient of determination = 0.97 and root mean square error = 1.5 °C for air temperature). The current version is applicable to daytime conditions and needs initial state conditions for the CBL model in the appropriate range to obtain the required performance. The coupled model allows routine observations from distant sites (e.g. rural, airport) to be used to predict air temperature and relative humidity in an urban area of interest. This simple model, which can be rapidly applied, could provide urban data for applications such as air quality forecasting and building energy modelling, in addition to outdoor thermal comfort.
Abstract:
The precision of quasioptical null-balanced bridge instruments for transmission and reflection coefficient measurements at millimeter and submillimeter wavelengths is analyzed. A Jones matrix analysis is used to describe the amount of power reaching the detector as a function of grid angle orientation, sample transmittance/reflectance and phase delay. An analysis is performed of the errors involved in determining the complex transmission and reflection coefficients after taking into account the quantization error in the grid angle and micrometer readings, the transmission or reflection coefficient of the sample, the noise equivalent power of the detector, the source power and the post-detection bandwidth. For a system fitted with a rotating grid with a resolution of 0.017 rad, a micrometer quantization error of 1 μm, a 1 mW source, and a detector with a noise equivalent power of 5 × 10⁻⁹ W Hz⁻¹/², the maximum errors at an amplitude transmission or reflection coefficient of 0.5 are below ±0.025.
Abstract:
We discuss some novel technologies that enable the implementation of shearing interferometry at the terahertz part of the spectrum. Possible applications include the direct measurement of lens parameters, the measurement of refractive index of materials that are transparent to terahertz frequencies, determination of homogeneity of samples, measurement of optical distortions and the non-contact evaluation of thermal expansion coefficient of materials buried inside media that are opaque to optical or infrared frequencies but transparent to THz frequencies. The introduction of a shear to a Gaussian free-space propagating terahertz beam in a controlled manner also makes possible a range of new encoding and optical signal processing modalities.
Abstract:
A number of recent experiments suggest that, at a given wetting speed, the dynamic contact angle formed by an advancing liquid-gas interface with a solid substrate depends on the flow field and geometry near the moving contact line. In the present work, this effect is investigated in the framework of an earlier developed theory that was based on the fact that dynamic wetting is, by its very name, a process of formation of a new liquid-solid interface (newly “wetted” solid surface) and hence should be considered not as a singular problem but as a particular case from a general class of flows with forming and/or disappearing interfaces. The results demonstrate that, in the flow configuration of curtain coating, where a liquid sheet (“curtain”) impinges onto a moving solid substrate, the actual dynamic contact angle indeed depends not only on the wetting speed and material constants of the contacting media, as in the so-called slip models, but also on the inlet velocity of the curtain, its height, and the angle between the falling curtain and the solid surface. In other words, for the same wetting speed the dynamic contact angle can be varied by manipulating the flow field and geometry near the moving contact line. The obtained results have important experimental implications: given that the dynamic contact angle is determined by the values of the surface tensions at the contact line and hence depends on the distributions of the surface parameters along the interfaces, which can be influenced by the flow field, one can use the overall flow conditions and the contact angle as a macroscopic multiparametric signal-response pair that probes the dynamics of the liquid-solid interface. This approach would allow one to investigate experimentally such properties of the interface as, for example, its equation of state and the rheological properties involved in the interface’s response to an external torque, and would help to measure its parameters, such as the coefficient of sliding friction, the surface-tension relaxation time, and so on.
Abstract:
Despite its relevance to a wide range of technological and fundamental areas, a quantitative understanding of protein surface clustering dynamics is often lacking. In inorganic crystal growth, surface clustering of adatoms is well described by diffusion-aggregation models. In such models, the statistical properties of the aggregate arrays often reveal the molecular scale aggregation processes. We investigate the potential of these theories to reveal hitherto hidden facets of protein clustering by carrying out concomitant observations of lysozyme adsorption onto mica surfaces, using atomic force microscopy, and Monte Carlo simulations of cluster nucleation and growth. We find that lysozyme clusters diffuse across the substrate at a rate that varies inversely with size. This result suggests which molecular scale mechanisms are responsible for the mobility of the proteins on the substrate. In addition, the surface diffusion coefficient of the monomer can also be extracted from the comparison between experiments and simulations. While concentrating on a model system of lysozyme-on-mica, this 'proof of concept' study successfully demonstrates the potential of our approach to understand and influence more biomedically applicable protein-substrate couples.
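A toy Monte Carlo sketch of size-dependent cluster mobility is given below; it is not the simulation used in the study, just an illustration of the idea that a cluster's diffusive step is accepted with a probability that falls off inversely with its size, and that clusters merge on contact.

```python
import random
import math

# Toy 2D off-lattice model: clusters are [x, y, size]; a move is attempted for a
# random cluster and accepted with probability 1/size, so mobility ~ 1/size.
random.seed(1)
clusters = [[random.uniform(0, 100), random.uniform(0, 100), 1] for _ in range(200)]

def step(clusters, move=1.0, capture=2.0):
    i = random.randrange(len(clusters))
    x, y, s = clusters[i]
    if random.random() < 1.0 / s:                     # size-dependent mobility
        ang = random.uniform(0, 2 * math.pi)
        clusters[i][0] = x + move * math.cos(ang)
        clusters[i][1] = y + move * math.sin(ang)
    # merge with any cluster that now lies within the capture radius
    xi, yi, si = clusters[i]
    for j in range(len(clusters)):
        if j != i and math.hypot(clusters[j][0] - xi, clusters[j][1] - yi) < capture:
            clusters[i][2] += clusters[j][2]          # aggregate the sizes
            clusters.pop(j)
            break

for _ in range(20000):
    step(clusters)
print("clusters remaining:", len(clusters),
      "largest size:", max(c[2] for c in clusters))
```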
Abstract:
Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Model prediction uncertainty quantification is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based model, the integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary, on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total), for a limited number of reaches, are used to initially assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated as land use type fractional areas) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash–Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point source dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets that simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there are insufficient data to support their many parameters.
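The behavioural threshold used in a GLUE analysis of this kind can be expressed with the Nash–Sutcliffe efficiency, computed as in the sketch below; the observed and simulated series are placeholders, and the 0.3 threshold is the one quoted in the abstract.

```python
import numpy as np

def nash_sutcliffe(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Nash–Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical monthly total phosphorus concentrations (mg P / l) for one reach
obs = np.array([0.12, 0.30, 0.22, 0.45, 0.18, 0.25])
sim = np.array([0.15, 0.26, 0.20, 0.35, 0.22, 0.27])

e = nash_sutcliffe(obs, sim)
behavioural = e >= 0.3          # threshold used to retain parameter sets in GLUE
print(f"E = {e:.2f}, behavioural: {behavioural}")
```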
Abstract:
Asymmetry in a distribution can arise from a long tail of values in the underlying process or from outliers that belong to another population that contaminate the primary process. The first paper of this series examined the effects of the former on the variogram, and this paper examines the effects of asymmetry arising from outliers. Simulated annealing was used to create normally distributed random fields of different sizes that are realizations of known processes described by variograms with different nugget:sill ratios. These primary data sets were then contaminated with randomly located and spatially aggregated outliers from a secondary process to produce different degrees of asymmetry. Experimental variograms were computed from these data by Matheron's estimator and by three robust estimators. The effects of standard data transformations on the coefficient of skewness and on the variogram were also investigated. Cross-validation was used to assess the performance of models fitted to experimental variograms computed from a range of data contaminated by outliers for kriging. The results showed that where skewness was caused by outliers the variograms retained their general shape, but showed an increase in the nugget and sill variances and nugget:sill ratios. This effect was only slightly greater for the smallest data set than for the two larger data sets, and there was little difference between the results for the latter. Overall, the effect of the size of the data set was small for all analyses. The nugget:sill ratio showed a consistent decrease after transformation to both square roots and logarithms; the decrease was generally larger for the latter, however. Aggregated outliers had different effects on the variogram shape from those that were randomly located, and this also depended on whether they were aggregated near the edge or the centre of the field. The results of cross-validation showed that the robust estimators and the removal of outliers were the most effective ways of dealing with outliers for variogram estimation and kriging.
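Matheron's method-of-moments estimator referred to above has a simple form, sketched here for a one-dimensional transect with hypothetical values; the robust estimators mentioned (e.g. Cressie–Hawkins) would replace the squared-difference average with a more outlier-resistant statistic.

```python
import numpy as np

def matheron_variogram(z: np.ndarray, max_lag: int) -> np.ndarray:
    """Matheron's estimator on a regular 1-D transect:
    gamma(h) = 1 / (2 * N(h)) * sum_i (z[i] - z[i + h])^2."""
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diffs = z[:-h] - z[h:]
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma

# Hypothetical transect: a smooth signal plus noise, with two outliers injected
rng = np.random.default_rng(0)
z = np.sin(np.linspace(0, 6, 200)) + 0.2 * rng.standard_normal(200)
z[[50, 120]] += 4.0                      # contamination from a secondary process

print(np.round(matheron_variogram(z, max_lag=5), 3))
```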