973 results for Empirical Comparison


Relevance: 30.00%

Abstract:

The rheological properties of dough and gluten are important for end-use quality of flour but there is a lack of knowledge of the relationships between fundamental and empirical tests and how they relate to flour composition and gluten quality. Dough and gluten from six breadmaking wheat qualities were subjected to a range of rheological tests. Fundamental (small-deformation) rheological characterizations (dynamic oscillatory shear and creep recovery) were performed on gluten to avoid the nonlinear influence of the starch component, whereas large-deformation tests were conducted on both dough and gluten. A number of variables from the various curves were considered and subjected to a principal component analysis (PCA) to get an overview of relationships between the various variables. The first component represented variability in protein quality, associated with elasticity and tenacity in large deformation (large positive loadings for resistance to extension and initial slope of dough and gluten extension curves recorded by the SMS/Kieffer dough and gluten extensibility rig, and the tenacity and strain hardening index of dough measured by the Dobraszczyk/Roberts dough inflation system), the elastic character of the hydrated gluten proteins (large positive loading for elastic modulus [G'], large negative loadings for tan δ and steady-state compliance [Je0]), the presence of high molecular weight glutenin subunits (HMW-GS) 5+10 vs. 2+12, and a size distribution of glutenin polymers shifted toward the high-end range. The second principal component was associated with flour protein content. Certain rheological data were influenced by protein content in addition to protein quality (area under dough extension curves and dough inflation curves [W]). The approach made it possible to bridge the gap between fundamental rheological properties, empirical measurements of physical properties, protein composition, and size distribution. The interpretation of this study gave indications of the molecular basis for differences in breadmaking performance.
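The PCA step described above can be sketched in a few lines of NumPy. The data below are random placeholders standing in for the study's rheological variables (resistance to extension, G', tan δ, Je0, strain hardening index, and so on); only the mechanics of the decomposition are illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 6 samples (wheat qualities) x 8 rheological variables.
X = rng.normal(size=(6, 8))

# Standardize each variable (column), then PCA via SVD of the centered matrix.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)

scores = U * s                   # sample coordinates on the principal components
loadings = Vt.T                  # variable loadings on each component
explained = s**2 / np.sum(s**2)  # fraction of variance per component

# Variables with large |loading| on the first component drive that axis
# (protein quality in the study); the second component tracked protein content.
```

Inspecting the loadings column-by-column is what lets the study tie each component back to groups of empirical and fundamental measurements.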

Relevance: 30.00%

Abstract:

Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models consists in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least-squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived: one of these alternatives is the empirical-information method.  Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method.
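The core idea separating the EM Contour algorithm from nearest-strongest-feature fitting can be illustrated with a toy translation-only tracker. This is not the paper's algorithm, only a hedged sketch: at each contour point the E-step forms a posterior-weighted average over several candidate features (a Bayes least-squares location estimate), and the M-step is a least-squares pose update; all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Template contour (unit circle) displaced by an unknown translation.
true_offset = np.array([2.0, -1.0])
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
template = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# At each contour point, 3 candidate "features": one near the true contour
# plus two clutter candidates (hypothetical stand-ins for image features).
candidates = (template + true_offset)[:, None, :] + rng.normal(0, 0.05, (40, 3, 2))
candidates[:, 1:, :] += rng.normal(0, 0.5, (40, 2, 2))  # clutter displacement

sigma = 0.5
offset = np.zeros(2)  # initial pose estimate (pure translation here)
for _ in range(20):
    pred = template + offset
    # E-step: posterior weight of each candidate given the predicted contour,
    # then a weighted-mean (Bayes least-squares) estimate of each location,
    # instead of snapping to the single strongest/nearest feature.
    d2 = np.sum((candidates - pred[:, None, :]) ** 2, axis=2)
    w = np.exp(-0.5 * d2 / sigma**2)
    w /= w.sum(axis=1, keepdims=True)
    virtual = np.sum(w[:, :, None] * candidates, axis=1)
    # M-step: least-squares pose update (for a translation, the mean residual).
    offset = np.mean(virtual - template, axis=0)
```

Because clutter candidates keep a nonzero posterior weight, no hard feature assignment is ever made; robustness comes from averaging rather than selection.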

Relevance: 30.00%

Abstract:

Experimentally and theoretically determined infrared spectra are reported for a series of straight-chain perfluorocarbons: C2F6, C3F8, C4F10, C5F12, C6F14, and C8F18. Theoretical spectra were determined using both density functional (DFT) and ab initio methods. Radiative efficiencies (REs) were determined using the method of Pinnock et al. (1995) and combined with atmospheric lifetimes from the literature to determine global warming potentials (GWPs). Theoretically determined absorption cross sections were within 10% of experimentally determined values. Despite being much less computationally expensive, DFT calculations were generally found to perform better than ab initio methods. There is a strong wavenumber dependence of radiative forcing in the region of the fundamental C-F vibration, and small differences in wavelength between band positions determined by theory and experiment have a significant impact on the REs. We apply an empirical correction to the theoretical spectra and then test this correction on a number of branched chain and cyclic perfluoroalkanes. We then compute absorption cross sections, REs, and GWPs for an additional set of perfluoroalkenes.
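A Pinnock-style radiative efficiency is, in essence, the absorption cross section integrated against a per-wavenumber forcing function, which is why small band-position errors matter in a region where the forcing varies strongly. The sketch below uses entirely made-up numbers (band position, strength, and forcing curve are hypothetical placeholders, not the paper's data or the Pinnock et al. tabulation); only the structure of the calculation is shown.

```python
import numpy as np

wavenumber = np.arange(500.0, 1500.0, 10.0)  # cm^-1 grid

# Toy C-F stretch band near 1250 cm^-1 (Gaussian shape, made-up strength):
cross_section = 2e-18 * np.exp(-0.5 * ((wavenumber - 1250.0) / 40.0) ** 2)

# Made-up forcing per unit absorption as a function of wavenumber (in the
# real method this comes from tabulated radiative-transfer results):
forcing_per_abs = 0.3 + 0.7 * np.exp(-0.5 * ((wavenumber - 1000.0) / 300.0) ** 2)

dnu = 10.0  # cm^-1 bin width
radiative_efficiency = np.sum(cross_section * forcing_per_abs) * dnu
```

Shifting the band center in this toy model changes the result, which mirrors the paper's point that theory-vs-experiment wavelength offsets feed directly into the REs, motivating the empirical correction.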

Relevance: 30.00%

Abstract:

Use of new technologies, such as virtual reality (VR), is important to corporations, yet understanding of their successful implementation is insufficiently developed. In this paper a case study is used to analyse the introduction of VR use in a British housebuilding company. Although the implementation was not successful in the manner initially anticipated, the study provides insight into the process of change, the constraints that inhibit implementation and the relationship between new technology and work organization. Comparison is made with the early use of CAD and similarities and differences between empirical findings of the case study and the previous literature are discussed.

Relevance: 30.00%

Abstract:

This paper compares a number of different extreme value models for determining the value at risk (VaR) of three LIFFE futures contracts. A semi-nonparametric approach is also proposed, where the tail events are modeled using the generalised Pareto distribution, and normal market conditions are captured by the empirical distribution function. The value at risk estimates from this approach are compared with those of standard nonparametric extreme value tail estimation approaches, with a small sample bias-corrected extreme value approach, and with those calculated from bootstrapping the unconditional density and bootstrapping from a GARCH(1,1) model. The results indicate that, for a holdout sample, the proposed semi-nonparametric extreme value approach yields superior results to other methods, but the small sample tail index technique is also accurate.
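The semi-nonparametric scheme described above can be sketched numerically: model the body of the distribution empirically and fit a generalised Pareto distribution (GPD) to exceedances over a high threshold, then read the tail quantile (VaR) off the fitted GPD. The losses below are simulated Student-t draws, not LIFFE data, and a simple method-of-moments GPD fit stands in for whatever estimator the paper uses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily losses with a heavy-ish tail (stand-in for futures returns).
losses = rng.standard_t(df=4, size=5000)

# Empirical body, GPD tail: threshold at the empirical 95% quantile.
u = np.quantile(losses, 0.95)
exc = losses[losses > u] - u          # exceedances over the threshold
n, n_u = losses.size, exc.size

# Method-of-moments GPD fit: mean = beta/(1-xi), var = beta^2/((1-xi)^2 (1-2 xi)).
m, v = exc.mean(), exc.var()
xi = 0.5 * (1.0 - m * m / v)          # tail (shape) index estimate
beta = m * (1.0 - xi)                 # scale estimate

# VaR at level p from the fitted tail:
p = 0.99
var_p = u + (beta / xi) * (((n / n_u) * (1.0 - p)) ** (-xi) - 1.0)
```

For quantiles beyond the range of the observed data this tail formula keeps working, which is the practical advantage over a purely empirical quantile.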

Relevance: 30.00%

Abstract:

The A-Train constellation of satellites provides a new capability to measure vertical cloud profiles that leads to more detailed information on ice-cloud microphysical properties than has been possible up to now. A variational radar–lidar ice-cloud retrieval algorithm (VarCloud) takes advantage of the complementary nature of the CloudSat radar and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar to provide a seamless retrieval of ice water content, effective radius, and extinction coefficient from the thinnest cirrus (seen only by the lidar) to the thickest ice cloud (penetrated only by the radar). In this paper, several versions of the VarCloud retrieval are compared with the CloudSat standard ice-only retrieval of ice water content, two empirical formulas that derive ice water content from radar reflectivity and temperature, and retrievals of vertically integrated properties from the Moderate Resolution Imaging Spectroradiometer (MODIS) radiometer. The retrieved variables typically agree to within a factor of 2, on average, and most of the differences can be explained by the different microphysical assumptions. For example, the ice water content comparison illustrates the sensitivity of the retrievals to assumed ice particle shape: if ice particles are modeled as oblate spheroids rather than spheres for radar scattering, then the retrieved ice water content is reduced by on average 50% in clouds with a reflectivity factor larger than 0 dBZ. VarCloud retrieves optical depths that are on average a factor of 2 lower than those from MODIS, which can be explained by the different assumptions on particle mass and area; if VarCloud mimics the MODIS assumptions then better agreement is found in effective radius and optical depth is overestimated. However, MODIS predicts the mean vertically integrated ice water content to be around a factor of 3 lower than that from VarCloud for the same retrievals, because the MODIS algorithm assumes that its retrieved effective radius (which is mostly representative of cloud top) is constant throughout the depth of the cloud. These comparisons highlight the need to refine microphysical assumptions in all retrieval algorithms and also for future studies to compare not only the mean values but also the full probability density function.

Relevance: 30.00%

Abstract:

We evaluated the accuracy of six watershed models of nitrogen export in streams (kg km⁻² yr⁻¹) developed for use in large watersheds and representing various empirical and quasi-empirical approaches described in the literature. These models differ in their methods of calibration and have varying levels of spatial resolution and process complexity, which potentially affect the accuracy (bias and precision) of the model predictions of nitrogen export and source contributions to export. Using stream monitoring data and detailed estimates of the natural and cultural sources of nitrogen for 16 watersheds in the northeastern United States (drainage areas of 475 to 70,000 km²), we assessed the accuracy of the model predictions of total nitrogen and nitrate-nitrogen export. The model validation included the use of an error modeling technique to identify biases caused by model deficiencies in quantifying nitrogen sources and biogeochemical processes affecting the transport of nitrogen in watersheds. Most models predicted stream nitrogen export to within 50% of the measured export in a majority of the watersheds. Prediction errors were negatively correlated with cultivated land area, indicating that the watershed models tended to overpredict export in less agricultural and more forested watersheds and underpredict in more agricultural basins. The magnitude of these biases differed appreciably among the models. Those models having more detailed descriptions of nitrogen sources, land and water attenuation of nitrogen, and water flow paths were found to have considerably lower bias and higher precision in their predictions of nitrogen export.
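The two validation summaries used above, the share of watersheds predicted to within 50% and the correlation of errors with cultivated land, are simple to compute. The numbers below are made-up illustrative values, not the study's 16-watershed data set.

```python
import numpy as np

# Hypothetical measured vs predicted nitrogen export (kg km^-2 yr^-1):
measured = np.array([120.0, 300.0, 80.0, 510.0, 45.0, 220.0])
predicted = np.array([150.0, 260.0, 110.0, 400.0, 70.0, 230.0])
cultivated_frac = np.array([0.05, 0.40, 0.10, 0.60, 0.02, 0.30])

# Share of watersheds predicted to within 50% of the measurement:
rel_err = (predicted - measured) / measured
within_50 = np.mean(np.abs(rel_err) <= 0.5)

# Correlation of prediction error with cultivated land area; a negative value
# means over-prediction in forested basins and under-prediction in
# agricultural ones, the bias pattern reported in the study.
bias_corr = np.corrcoef(rel_err, cultivated_frac)[0, 1]
```

Using relative rather than absolute errors keeps large agricultural basins from dominating the bias diagnostic.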

Relevance: 30.00%

Abstract:

Empirical mode decomposition (EMD) is a data-driven method used to decompose data into oscillatory components. This paper examines to what extent the defined algorithm for EMD might be susceptible to data format. Two key issues with EMD are its stability and computational speed. This paper shows that for a given signal there is no significant difference between results obtained with single (binary32) and double (binary64) floating points precision. This implies that there is no benefit in increasing floating point precision when performing EMD on devices optimised for single floating point format, such as graphical processing units (GPUs).
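The single-vs-double precision comparison can be reproduced in miniature. The sketch below is not a full EMD implementation: it extracts only the first IMF and uses linear-interpolated envelopes in place of the cubic splines of standard EMD, which is enough to show that running the same sifting in binary32 and binary64 gives nearly identical results.

```python
import numpy as np

def sift_first_imf(x, n_sift=8):
    """Simplified extraction of the first IMF by iterative sifting.
    Envelopes are linear interpolations through local extrema (real EMD
    uses cubic splines); each pass is kept in the dtype of the input."""
    dtype = x.dtype
    h = x.copy()
    idx = np.arange(x.size)
    for _ in range(n_sift):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if maxima.size < 2 or minima.size < 2:
            break
        upper = np.interp(idx, maxima, h[maxima])
        lower = np.interp(idx, minima, h[minima])
        # np.interp promotes to float64; cast back so the working
        # precision under test is preserved on every pass.
        h = (h - (upper + lower) / 2.0).astype(dtype)
    return h

t = np.linspace(0.0, 1.0, 2000)
signal = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)

imf64 = sift_first_imf(signal.astype(np.float64))
imf32 = sift_first_imf(signal.astype(np.float32))

# Difference attributable to the floating-point format alone:
max_diff = np.max(np.abs(imf64 - imf32.astype(np.float64)))
```

A `max_diff` many orders of magnitude below the signal amplitude is the kind of evidence behind the paper's conclusion that binary32 suffices on GPU-class hardware.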

Relevance: 30.00%

Abstract:

Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
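The regression backbone of such an empirical system is straightforward to sketch. Everything below is synthetic: a made-up CO2-like trend as primary predictor, a random ENSO-like index as secondary predictor, and a fabricated temperature series; the leave-one-out loop mimics a hindcast evaluation and the correlation of hindcasts with observations is the deterministic skill metric mentioned above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for the 1961-2013 hindcast setup (hypothetical data):
years = np.arange(1961, 2014)
co2 = 315.0 + 1.5 * (years - 1961)           # CO2-equivalent trend (made up)
enso = rng.normal(0.0, 1.0, years.size)       # ENSO-index stand-in
temp = 0.01 * co2 + 0.3 * enso + rng.normal(0.0, 0.2, years.size)

# Leave-one-out hindcasts from multiple linear regression
# (CO2 as the primary predictor, ENSO as a secondary one):
X = np.column_stack([np.ones_like(co2), co2, enso])
hindcast = np.empty_like(temp)
for i in range(years.size):
    keep = np.arange(years.size) != i
    coef, *_ = np.linalg.lstsq(X[keep], temp[keep], rcond=None)
    hindcast[i] = X[i] @ coef

skill = np.corrcoef(hindcast, temp)[0, 1]    # deterministic skill metric
```

Leaving out the forecast year guards against the trivially inflated skill that in-sample fitting of a trended predictor would produce.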

Relevance: 30.00%

Abstract:

Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. An initial investigation into the theoretical and empirical properties of this class of methods is presented; in some cases we observe an advantage in the use of biased weight estimates, but we advocate caution in the use of such estimates.
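The underlying task, estimating a ratio of normalizing constants by simulation, can be shown on a toy problem where the truth is known. This is a plain importance-sampling sketch, not the random-weight or SMC machinery of the paper: the two unnormalized densities are Gaussians (so their constants are actually tractable) purely so the estimator can be checked.

```python
import numpy as np

rng = np.random.default_rng(3)

def unnorm_1(x):          # exp(-x^2 / (2 * 1^2)),  Z1 = sqrt(2*pi) * 1
    return np.exp(-0.5 * x**2)

def unnorm_2(x):          # exp(-x^2 / (2 * 2^2)),  Z2 = sqrt(2*pi) * 2
    return np.exp(-0.5 * (x / 2.0) ** 2)

# Importance distribution q = N(0, 3^2), wide enough to cover both targets.
sigma_q = 3.0
x = rng.normal(0.0, sigma_q, 200_000)
q = np.exp(-0.5 * (x / sigma_q) ** 2) / (sigma_q * np.sqrt(2 * np.pi))

z1_hat = np.mean(unnorm_1(x) / q)     # estimate of Z1
z2_hat = np.mean(unnorm_2(x) / q)     # estimate of Z2
bf_hat = z1_hat / z2_hat              # estimated ratio of normalizing constants

true_bf = 0.5                          # sqrt(2*pi)*1 / (sqrt(2*pi)*2)
```

For a Markov random field neither `unnorm_1` nor `unnorm_2` could be integrated directly, which is where the paper's simulation-based weight estimates come in.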

Relevance: 30.00%

Abstract:

The thermo-solvatochromic behaviors of 2,6-diphenyl-4-(2,4,6-triphenylpyridinium-1-yl) phenolate, RB; 2,6-dichloro-4-(2,4,6-triphenylpyridinium-1-yl) phenolate, WB; 2,6-dibromo-4-[(E)-2-(1-methylpyridinium-4-yl)ethenyl] phenolate, MePMBr2; and 2,6-dibromo-4-[(E)-2-(1-n-octylpyridinium-4-yl)ethenyl] phenolate, OcPMBr2, have been investigated in binary mixtures of the ionic liquid (IL) 1-(1-butyl)-3-methylimidazolium tetrafluoroborate, [BuMeIm][BF4], and water (W), in the temperature range from 10 to 60 °C. Plots of the empirical solvent polarities, ET(probe) in kcal mol⁻¹, versus the mole fraction of water in the binary mixture, χW, showed nonlinear, i.e., nonideal, behavior. Solvation by these IL-W mixtures shows the following similarities to that by aqueous aliphatic alcohols: the same solvation model can be conveniently employed to treat the data obtained; it is based on the presence, in the bulk medium and in the probe solvation shell, of IL, W, and the 1:1 IL-W "complex" solvent. The origin of the nonideal solvation behavior appears to be the same, namely preferential solvation of the probe, in particular by the complex solvent. The strength of association of the IL-W complex and the polarity of the IL lie between the corresponding values for aqueous methanol and aqueous ethanol. A temperature increase causes gradual desolvation of all probes employed. A difference between solvation by IL-W mixtures and by aqueous alcohols is that probe-solvent hydrophobic interactions appear to play a minor role in the former, probably because solvation is dominated by hydrogen bonding and Coulombic interactions between the ions of the IL and the zwitterionic probes.

Relevance: 30.00%

Abstract:

Ionic liquids, ILs, carrying long-chain alkyl groups are surface active (SAILs). We investigated the micellar properties of the SAIL 1-hexadecyl-3-methylimidazolium chloride, C16MeImCl, and compared the data with 1-hexadecylpyridinium chloride, C16PyCl, and benzyl(3-hexadecanoylaminoethyl)dimethylammonium chloride, C15AEtBzMe2Cl. The properties compared include the critical micelle concentration, cmc; thermodynamic parameters of micellization; and empirical polarity and water concentrations in the interfacial regions. In the temperature range from 15 to 75 °C, the order of cmc in H2O and in D2O is C16PyCl > C16MeImCl > C15AEtBzMe2Cl. The enthalpies of micellization, ΔH°mic, were calculated indirectly by use of the van 't Hoff treatment and directly by isothermal titration calorimetry, ITC. Calculation of the degree of counter-ion dissociation, αmic, from conductivity measurements by use of the Evans equation requires knowledge of the aggregation numbers, Nagg, at different temperatures. We have introduced a reliable method for carrying out this calculation, based on the volume and length of the monomer and the dependence of Nagg on temperature. The Nagg calculated for C16PyCl and C16MeImCl were corroborated by light-scattering measurements. Conductivity- and ITC-based ΔH°mic do not agree; reasons for this discrepancy are discussed. Micelle formation is entropy driven: at all studied temperatures for C16MeImCl; only up to 65 °C for C16PyCl; and up to 55 °C for C15AEtBzMe2Cl. All these data can be rationalized by considering hydrogen bonding between the head-ions of the monomers in the micellar aggregate. The empirical polarities and concentrations of interfacial water were found to be independent of the nature of the head-group.
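The indirect (van 't Hoff) route to ΔH°mic mentioned above can be sketched numerically: fit ln x_cmc against temperature, differentiate, and apply one common form of the relation for ionic surfactants. All numbers below are made-up illustrative values, not the measured cmc data for these surfactants, and the assumed α and the (2 − α) prefactor are stated assumptions of the sketch.

```python
import numpy as np

R = 8.314                                   # gas constant, J mol^-1 K^-1
T = np.array([288.15, 298.15, 308.15, 318.15, 328.15])        # K
cmc = np.array([0.90e-3, 1.00e-3, 1.15e-3, 1.35e-3, 1.60e-3])  # mol L^-1 (made up)
alpha = 0.3                                 # counter-ion dissociation (assumed)

# Mole-fraction scale cmc (approximation: divide by ~55.5 M water):
x_cmc = cmc / 55.5

# Fit ln(x_cmc) vs T with a quadratic, then differentiate analytically:
coef = np.polyfit(T, np.log(x_cmc), 2)
dlnx_dT = np.polyval(np.polyder(coef), T)

# One common van 't Hoff form for an ionic surfactant:
dH_mic = -(2.0 - alpha) * R * T**2 * dlnx_dT    # J mol^-1
dG_mic = (2.0 - alpha) * R * T * np.log(x_cmc)  # J mol^-1
```

Because this route differentiates a fitted curve, it is sensitive to the functional form chosen for ln x_cmc(T), one reason indirect van 't Hoff values and direct ITC values can disagree, as the abstract notes.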

Relevance: 30.00%

Abstract:

A common problem when planning large free-field PV plants is optimizing the ground occupation ratio while maintaining low shading losses. Due to the complexity of this task, several PV plants have been built using various configurations. In order to compare the shading losses of different PV technologies and array designs, empirical performance data from five free-field PV plants operating in Germany were analyzed. The data collected comprised 140 winter days from October 2011 until March 2012. The relative shading losses were estimated by comparing the energy output of selected arrays in the front rows (shading-free) against that of shaded arrays in the back rows of the same plant. The results showed that landscape mounting with mc-Si PV modules yielded significantly better results than portrait mounting. With CIGS modules, making cross-table strings from the lower modules was not as beneficial as expected and produced higher losses than a one-string-per-table layout. Parallel substrings with CdTe showed relatively low losses. Of the two CdTe products analyzed, neither showed significantly better performance.
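The loss estimate used above reduces to a ratio of paired energy yields. The figures below are hypothetical winter totals, not the measured data from the German plants; normalizing per installed kWp (as done here implicitly by comparing like-sized arrays) is what makes front and back rows comparable.

```python
# Relative shading loss from paired front-row (shading-free) and back-row
# (shaded) arrays of the same plant; energies are made-up winter totals
# in kWh per kWp.
e_front = 412.0   # reference array, no row-to-row shading
e_back = 371.0    # shaded array in the back rows

shading_loss = 1.0 - e_back / e_front   # fraction of energy lost to shading
```

Aggregating this ratio over many winter days averages out day-to-day irradiance differences that affect both rows equally.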

Relevance: 30.00%

Abstract:

This article presents the findings from an empirical study examining the relationship between total quality management (TQM) practices and quality performance in Australian organizations. A comparison is made between organizations that have adopted formal TQM programs and organizations without a formal program in place. It was recognized that the lack of a formal program did not necessarily mean TQM principles were not being practiced. The findings show that the firms adopting formal TQM programs implement several TQM practices at a higher level than those that do not have TQM programs. This difference, however, is not apparent in the case of quality performance. Furthermore, the findings show the strong links between TQM practices and quality performance, and there is no significant difference between organizations implementing formal TQM programs and those organizations simply adopting TQM practices. This suggests that it is the adoption of quality practices that matters rather than formal programs per se.

Relevance: 30.00%

Abstract:

Some extant theory and empirical research suggests that youth problem behaviors, such as substance abuse and delinquency, reflect a single underlying dimension of behavior, whereas other work suggests there are several distinct dimensions. Few studies have examined potential international differences in the structure of problem behavior, where cultural and policy differences may create more variation in behavior and different structures. This study explored the structure of problem behavior in two representative samples of youth (ages 12-17), from Maine and Oregon in the United States (N = 33,066) and from Victoria, Australia (N = 8,500). The authors examined the degree to which data from the two countries produce similar model structures using indicators of problem behavior. Results show that the data are best represented by two factors, substance use and delinquency, and there appear to be more similarities than differences in the models across countries. Implications for understanding problem behavior across cultural and developmental groups, as well as practical and policy implications, are discussed.