910 results for indirect and composite estimators
Abstract:
The current study aims to assess the applicability of direct or indirect normalization for the analysis of fractional anisotropy (FA) maps in the context of diffusion-weighted images (DWIs) contaminated by ghosting artifacts. We found that FA maps obtained by direct normalization showed generally higher anisotropy than those obtained by indirect normalization, and the disparities were aggravated by the presence of ghosting artifacts in DWIs. The voxel-wise statistical comparisons demonstrated that indirect normalization reduced the influence of artifacts and enhanced the sensitivity of detecting anisotropy differences between groups. This suggests that images contaminated with ghosting artifacts can be sensibly analyzed using indirect normalization.
Abstract:
BACKGROUND: Enriching poultry meat with long-chain n-3 polyunsaturated fatty acids (LC n-3 PUFA) can increase low population intakes of LC n-3 PUFA, but fishy taints can spoil reheated meat. This experiment determined the effect of different amounts of LC n-3 PUFA and vitamin E in the broiler diet on the fatty acid composition and sensory characteristics of the breast meat. Ross 308 broilers (120) were randomly allocated to one of five treatments from 21 to 42 days of age. Diets contained (g kg−1) 0, 9 or 18 LC n-3 PUFA (0LC, 9LC, 18LC), and 100, 150 or 200 mg DL-α-tocopherol acetate kg−1 (E). The five diets were 0LC100E, 9LC100E, 18LC100E, 18LC150E, 18LC200E, with four pens per diet, except 18LC100E (eight pens). Breast meat was analysed for fatty acids (uncooked) and sensory analysis by R-index (reheated). RESULTS: LC n-3 PUFA content (mg kg−1 meat) was 514 (0LC100E) and 2236 (9LC and 18LC). Compared with 0LC100E, meat from 18LC100E and 18LC150E tasted significantly different, while 23% of panellists detected fishy taints in 9LC100E and 18LC200E. CONCLUSION: Chicken meat can be enriched with nutritionally meaningful amounts of LC n-3 PUFA, but > 100 mg DL-α-tocopherol acetate kg−1 broiler diet is needed to protect reheated meat from oxidative deterioration. Copyright © 2010 Society of Chemical Industry
Abstract:
Methane is the second most important anthropogenic greenhouse gas in the atmosphere next to carbon dioxide. Its global warming potential (GWP) for a time horizon of 100 years is 25, which makes it an attractive target for climate mitigation policies. Although the methane GWP traditionally includes the methane indirect effects on the concentrations of ozone and stratospheric water vapour, it does not take into account the production of carbon dioxide from methane oxidation. We argue here that this CO2-induced effect should be included for fossil sources of methane, which results in slightly larger GWP values for all time horizons. If the global temperature change potential is used as an alternative climate metric, then the impact of the CO2-induced effect is proportionally much larger. We also discuss what the correction term should be for methane from anthropogenic biogenic sources.
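The correction term the authors discuss can be bounded by simple stoichiometry: if every kilogram of fossil CH4 ultimately oxidises to CO2, the added contribution to the 100-year GWP is the CO2/CH4 molar-mass ratio, since the GWP of CO2 is 1 by definition. The sketch below is an illustrative upper bound under that complete-oxidation assumption, not the paper's own calculation:

```python
# Upper-bound sketch of the CO2-induced GWP correction for fossil methane.
# Assumes complete oxidation of CH4 to CO2 (the paper's actual correction
# would be smaller, since part of the carbon cycle is already accounted for).

M_CO2, M_CH4 = 44.01, 16.04   # molar masses, g/mol
GWP100_CH4 = 25.0             # conventional 100-year GWP cited in the abstract

co2_correction = M_CO2 / M_CH4           # kg CO2 produced per kg CH4 oxidised
gwp_with_oxidation = GWP100_CH4 + co2_correction
print(round(co2_correction, 2), round(gwp_with_oxidation, 2))  # 2.74 27.74
```

This makes concrete why the effect is "slightly larger" for all time horizons: the correction is a fixed additive term of order a few units against a base GWP of 25.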
Abstract:
This paper approaches the subject of brand equity measurement online and offline. The existing body of research knowledge on brand equity measurement has derived from classical contexts; however, the majority of today's brands prosper simultaneously online and offline. Since branding on the Web needs to address the unique characteristics of computer-mediated environments, it was posited that classical measures of brand equity were inadequate for this category of brands. Aaker's guidelines for building a brand equity measurement system were thus followed, and his Brand Equity Ten was employed as a point of departure. The main challenge was complementing traditional measures of brand equity with new measures pertinent to the Web. Following 16 semi-structured interviews with experts, ten additional measures were identified.
Abstract:
Unless a direct hedge is available, cross hedging must be used. In such circumstances portfolio theory implies that a composite hedge (the use of two or more hedging instruments to hedge a single spot position) will be beneficial. The study and use of composite hedging has been neglected; possibly because it requires the estimation of two or more hedge ratios. This paper demonstrates a statistically significant increase in out-of-sample effectiveness from the composite hedging of the Amex Oil Index using S&P500 and New York Mercantile Exchange crude oil futures. This conclusion is robust to the technique used to estimate the hedge ratios, and to allowance for transactions costs, dividends and the maturity of the futures contracts.
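The hedge-ratio estimation the abstract alludes to can be sketched with ordinary least squares, one common technique among the several the paper reports robustness across. The data below are synthetic stand-ins, not the Amex Oil Index, S&P 500 futures or NYMEX crude oil futures series used in the study:

```python
# Minimal sketch of composite hedge-ratio estimation: regress spot returns on
# the returns of two futures contracts; the slope coefficients are the hedge
# ratios, and effectiveness is the variance reduction of the hedged position.
import numpy as np

rng = np.random.default_rng(0)
f1 = rng.normal(0, 0.01, 500)             # synthetic equity-index futures returns
f2 = rng.normal(0, 0.02, 500)             # synthetic crude-oil futures returns
spot = 0.6 * f1 + 0.3 * f2 + rng.normal(0, 0.005, 500)  # synthetic spot returns

X = np.column_stack([np.ones_like(f1), f1, f2])
beta = np.linalg.lstsq(X, spot, rcond=None)[0]   # [intercept, h1, h2]
hedged = spot - beta[1] * f1 - beta[2] * f2      # residual (hedged) position
effectiveness = 1 - hedged.var() / spot.var()    # fraction of variance removed
print(beta[1:], round(effectiveness, 3))
```

The study's point is that evaluating `effectiveness` out-of-sample, rather than in-sample as above, is what demonstrates a genuine benefit from the second instrument.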
Abstract:
This study investigated whether children’s fears could be un-learned using Rachman’s indirect pathways for learning fear. We hypothesised that positive information and modelling a non-anxious response are effective methods of un-learning fears acquired through verbal information. One hundred and seven children aged 6–8 years received negative information about one animal and no information about another. Fear beliefs and behavioural avoidance were measured. Children were randomised to receive positive verbal information, modelling, or a control task. Fear beliefs and behavioural avoidance were measured again. Positive information and modelling led to lower fear beliefs and behavioural avoidance than the control condition. Positive information was more effective than modelling in reducing fear beliefs, and both methods significantly reduced behavioural avoidance. The results support Rachman’s indirect pathways as viable fear un-learning pathways and support associative learning theories.
Abstract:
This paper provides a comparative study of the performance of cross-flow and counter-flow M-cycle heat exchangers for dew point cooling. It is recognised that evaporative cooling systems offer a low energy alternative to conventional air conditioning units. Recently emerged dew point cooling, a renovated evaporative cooling configuration, is claimed to deliver much higher cooling output than conventional evaporative modes owing to its use of M-cycle heat exchangers. Cross-flow and counter-flow heat exchangers, the available structures for M-cycle dew point cooling, were theoretically and experimentally investigated to identify the difference in cooling effectiveness between the two under parallel structural/operational conditions, to optimise the geometrical sizes of the exchangers, and to suggest their favoured operational conditions. Through development of a dedicated computer model and case-by-case experimental testing and validation, a parametric study of the cooling performance of the counter-flow and cross-flow heat exchangers was carried out. The results showed that the counter-flow exchanger offered greater (around 20% higher) cooling capacity, as well as greater (15%–23% higher) dew-point and wet-bulb effectiveness, when equal in physical size and under the same operating conditions. The cross-flow system, however, had a higher (by around 10%) coefficient of performance (COP). As increased cooling effectiveness leads to reduced air volume flow rate, smaller system size and lower cost, and since size and cost are the inherent barriers to the use of dew point cooling as an alternative to conventional cooling systems, the counter-flow system is considered to offer practical advantages over the cross-flow system that would aid the uptake of this low energy cooling alternative.
In line with the increased global demand for energy for cooling buildings, driven largely by the economic boom in emerging developing nations and by recognised global warming, the research results are of significant importance for promoting deployment of the low energy dew point cooling system, helping to reduce energy use in cooling buildings and to cut the associated carbon emissions.
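The two effectiveness metrics compared in the abstract have standard definitions in the evaporative cooling literature: both ratio the achieved supply-air temperature drop against a reference depression, either to the inlet wet-bulb or the inlet dew-point temperature. The temperatures below are assumed illustrative values, not measurements from the study:

```python
# Standard effectiveness metrics for dew point (M-cycle) coolers.
# Wet-bulb effectiveness can exceed 1 for M-cycle exchangers, which is
# precisely what distinguishes them from conventional evaporative coolers.

def wet_bulb_effectiveness(t_in, t_out, t_wb):
    """Supply-air temperature drop relative to the inlet wet-bulb depression."""
    return (t_in - t_out) / (t_in - t_wb)

def dew_point_effectiveness(t_in, t_out, t_dp):
    """Supply-air temperature drop relative to the inlet dew-point depression."""
    return (t_in - t_out) / (t_in - t_dp)

t_in, t_out = 30.0, 18.0   # dry-bulb inlet/outlet, deg C (assumed)
t_wb, t_dp = 20.0, 15.0    # inlet wet-bulb and dew-point, deg C (assumed)
print(round(wet_bulb_effectiveness(t_in, t_out, t_wb), 2))   # 1.2
print(round(dew_point_effectiveness(t_in, t_out, t_dp), 2))  # 0.8
```

The reported 15%–23% advantage of the counter-flow exchanger refers to differences in these ratios at matched size and operating conditions.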
Abstract:
The main uncertainty in anthropogenic forcing of the Earth’s climate stems from pollution aerosols, particularly their “indirect effect” whereby aerosols modify cloud properties. We develop a new methodology to derive a measurement-based estimate using almost exclusively information from an Earth radiation budget instrument (CERES) and a radiometer (MODIS). We derive a statistical relationship between planetary albedo and cloud properties, and, further, between the cloud properties and column aerosol concentration. Combining these relationships with a data set of satellite-derived anthropogenic aerosol fraction, we estimate an anthropogenic radiative forcing of −0.9 ± 0.4 W m−2 for the aerosol direct effect and of −0.2 ± 0.1 W m−2 for the cloud albedo effect. Because of uncertainties in both satellite data and the method, the uncertainty of this result is likely larger than the values given here, which correspond only to the quantifiable error estimates. The results nevertheless indicate that current global climate models may overestimate the cloud albedo effect.
Abstract:
Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol–cloud–radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds, since most GCMs do not include ice nucleation effects and none of the models explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (τa) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data, at least over the ocean. The relationship between τa and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect, mainly in terms of an autoconversion parameterisation, has to be revisited in the GCMs. A positive relationship between total cloud fraction (fcld) and τa, as found in the satellite data, is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld–τa relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in satellite data between τa and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR–τa relationship show a strong positive correlation between τa and fcld.
The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of τa and by parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors of the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing, inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships, yields a global annual mean value of −1.5 ± 0.5 W m−2. In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic τa and satellite-retrieved Nd–τa regression slopes, respectively, yields a global annual-mean aerosol direct effect estimate of −0.4 ± 0.2 W m−2 and a cloudy-sky (aerosol indirect effect) estimate of −0.7 ± 0.5 W m−2, with a total estimate of −1.2 ± 0.4 W m−2.
Abstract:
Natural aerosol plays a significant role in the Earth’s system due to its ability to alter the radiative balance of the Earth. Here we use a global aerosol microphysics model together with a radiative transfer model to estimate radiative effects for five natural aerosol sources in the present-day atmosphere: dimethyl sulfide (DMS), sea salt, volcanoes, monoterpenes, and wildfires. We calculate large annual global mean aerosol direct and cloud albedo effects, especially for DMS-derived sulfate (−0.23 W m−2 and −0.76 W m−2, respectively), volcanic sulfate (−0.21 W m−2 and −0.61 W m−2) and sea salt (−0.44 W m−2 and −0.04 W m−2). The cloud albedo effect responds nonlinearly to changes in emission source strengths. The natural sources have both markedly different radiative efficiencies and indirect/direct radiative effect ratios. Aerosol sources that contribute a large number of small particles (DMS-derived and volcanic sulfate) are highly effective at influencing cloud albedo per unit of aerosol mass burden.
Abstract:
We study individual decision making in a lottery-choice task performed by three different populations: gamblers under psychological treatment (“addicts”), gamblers’ spouses (“victims”), and people who are neither gamblers nor gamblers’ spouses (“normals”). We find that addicts are willing to take less risk than normals, but the difference shrinks as a gambler’s time under treatment increases. The large majority of victims report themselves unwilling to take any risk at all. However, addicts in their first year of treatment react more than other addicts to different values of the risk–return parameter.
Abstract:
We present a new composite of geomagnetic activity which is designed to be as homogeneous in its construction as possible. This is done by only combining data that, by virtue of the locations of the source observatories used, have similar responses to solar wind and IMF (interplanetary magnetic field) variations. This will enable us (in Part 2, Lockwood et al., 2013a) to use the new index to reconstruct the interplanetary magnetic field, B, back to 1846 with a full analysis of errors. Allowance is made for the effects of secular change in the geomagnetic field. The composite uses interdiurnal variation data from Helsinki for 1845–1890 (inclusive) and 1893–1896 and from Eskdalemuir from 1911 to the present. The gaps are filled using data from the Potsdam (1891–1892 and 1897–1907) and the nearby Seddin observatories (1908–1910), with intercalibration achieved using the Potsdam–Seddin sequence. The new index is termed IDV(1d) because it employs many of the principles of the IDV index derived by Svalgaard and Cliver (2010), inspired by the u index of Bartels (1932); however, we revert to using one-day (1d) means, as employed by Bartels, because the use of near-midnight values in IDV introduces contamination by the substorm current wedge auroral electrojet, giving noise and a dependence on solar wind speed that varies with latitude. The composite is compared with independent, early data from European-sector stations, Greenwich, St Petersburg, Parc St Maur, and Ekaterinburg, as well as the composite u index, compiled from 2–6 stations by Bartels, and the IDV index of Svalgaard and Cliver. Agreement is found to be extremely good in all but two cases. Firstly, the Greenwich data are shown to have gradually degraded in quality until new instrumentation was installed in 1915.
Secondly, we infer that the Bartels u index is increasingly unreliable before about 1886 and overestimates the solar cycle amplitude between 1872 and 1883 and this is amplified in the proxy data used before 1872. This is therefore also true of the IDV index which makes direct use of the u index values.
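The core of an interdiurnal-variation index of this kind is the mean absolute difference between successive daily means of a geomagnetic field component. The sketch below illustrates that construction on synthetic data; the real IDV(1d) additionally involves secular-change corrections and station intercalibration that are not shown here:

```python
# Minimal sketch of a one-day-mean interdiurnal variation index, in the
# spirit of IDV(1d): average |difference between successive daily means|
# of the horizontal field over a year. Data are synthetic, not observatory values.
import numpy as np

rng = np.random.default_rng(1)
hourly_H = 20000 + rng.normal(0, 15, 365 * 24)   # nT, synthetic hourly field
daily_means = hourly_H.reshape(365, 24).mean(axis=1)   # one-day (1d) means
idv_1d = np.abs(np.diff(daily_means)).mean()           # annual index value, nT
print(round(idv_1d, 2))
```

Using full-day means, rather than the near-midnight spot values of the original IDV, is what the abstract argues suppresses contamination by the auroral electrojet.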
Abstract:
This study investigated the effects of increased genetic diversity in winter wheat (Triticum aestivum L.), either from hybridization across genotypes or from physical mixing of lines, on grain yield, grain quality, and yield stability in different cropping environments. Sets of pure lines (no diversity), chosen for high yielding ability or high quality, were compared with line mixtures (intermediate level of diversity), and lines crossed with each other in composite cross populations (CCPn, high diversity). Additional populations containing male sterility genes (CCPms) to increase outcrossing rates were also tested. Grain yield, grain protein content, and protein yield were measured at four sites (two organically-managed and two conventionally-managed) over three years, using seed harvested locally in each preceding year. CCPn and mixtures out-yielded the mean of the parents by 2.4% and 3.6%, respectively. These yield differences were consistent across genetic backgrounds but partly inconsistent across cropping environments and years. Yield stability measured by environmental variance was higher in CCPn and CCPms than the mean of the parents. An index of yield reliability tended to be higher in CCPn, CCPms and mixtures than the mean of the parents. Lin and Binns’ superiority values of yield and protein yield were consistently and significantly lower (i.e. better) in the CCPs than in the mean of the parents, but not different between CCPs and mixtures. However, CCPs showed greater early ground cover and plant height than mixtures. When compared with the (locally non-predictable) best-yielding pure line, CCPs and mixtures exhibited lower mean yield and somewhat lower yield reliability but comparable superiority values. Thus, establishing CCPs from smaller sets of high-performing parent lines might optimize their yielding ability. 
On the whole, the results demonstrate that using increased within-crop genetic diversity can produce wheat crops with improved yield stability and good yield reliability across variable and unpredictable cropping environments.
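Lin and Binns' superiority measure, used above to compare CCPs, mixtures and pure lines, is defined as P_i = Σ_j (x_ij − M_j)² / (2n), where M_j is the best yield observed in environment j and n is the number of environments; lower values are better. The yields below are invented for illustration, not data from the study:

```python
# Sketch of Lin and Binns' (1988) cultivar superiority measure P_i:
# mean squared distance of each genotype from the environment-wise best
# performance, divided by 2. Lower P_i = closer to the best everywhere.
import numpy as np

yields = np.array([            # rows: genotypes, cols: environments (t/ha, invented)
    [5.1, 4.8, 6.0, 5.5],
    [5.4, 4.9, 5.8, 5.9],
    [4.7, 5.2, 5.6, 5.4],
])
best = yields.max(axis=0)                        # M_j: best yield per environment
n = yields.shape[1]                              # number of environments
P = ((yields - best) ** 2).sum(axis=1) / (2 * n)
print(np.round(P, 4))
```

Because P_i penalises any shortfall from the locally best entry, it rewards exactly the combination of mean performance and stability that the abstract reports for the CCPs.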