858 results for panel estimates


Relevance: 20.00%

Publisher:

Abstract:

In this study, we examine seasonal and geographical variability of the marine aerosol fine-mode fraction (fm) and its impacts on deriving the anthropogenic component of aerosol optical depth (τa) and direct radiative forcing from multispectral satellite measurements. A proxy of fm, empirically derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5 data, shows large seasonal and geographical variations that are consistent with the Goddard Chemistry Aerosol Radiation Transport (GOCART) and Global Modeling Initiative (GMI) model simulations. The so-derived seasonally and spatially varying fm is then implemented into a method of estimating τa and direct radiative forcing from the MODIS measurements. It is found that the use of a constant value for fm, as in previous studies, would have overestimated τa by about 20% over the global ocean, with the overestimation reaching ~45% in some regions and seasons. The 7-year (2001–2007) global ocean average τa is 0.035, with yearly averages ranging from 0.031 to 0.039. Future improvements in measurements are needed to better separate anthropogenic aerosols from natural ones and to narrow the wide range of aerosol direct radiative forcing.


The INSIG2 rs7566605 polymorphism was identified for obesity (BMI ≥ 30 kg/m²) in one of the first genome-wide association studies, but replications were inconsistent. We collected statistics from 34 studies (n = 74,345), including general population (GP) studies, population-based studies with subjects selected for conditions related to a better health status ('healthy population', HP), and obesity studies (OB). We tested five hypotheses to explore potential sources of heterogeneity. The meta-analysis of 27 studies on Caucasian adults (n = 66,213) combining the different study designs did not support an overall association of the CC-genotype with obesity, yielding an odds ratio (OR) of 1.05 (p-value = 0.27). The I² measure of 41% (p-value = 0.015) indicated between-study heterogeneity. Restricting to GP studies resulted in a lower I² measure of 11% (p-value = 0.33) and an OR of 1.10 (p-value = 0.015). Regarding the five hypotheses, our data showed (a) some difference between GP and HP studies (p-value = 0.012) and (b) an association in extreme comparisons (BMI ≥ 32.5, 35.0, 37.5, or 40.0 kg/m² versus BMI < 25 kg/m²), yielding ORs of 1.16, 1.18, 1.22, or 1.27 (p-values 0.001 to 0.003), which was also underscored by significantly increased CC-genotype frequencies across BMI categories (10.4% to 12.5%, p-value for trend = 0.0002). We did not find evidence for differential ORs (c) among studies with higher than average obesity prevalence compared to lower, (d) among studies with BMI assessment after the year 2000 compared to those before, or (e) among studies from older populations compared to younger. Analysis of non-Caucasian adults (n = 4889) or children (n = 3243) yielded ORs of 1.01 (p-value = 0.94) or 1.15 (p-value = 0.22), respectively. There was no evidence for an overall association of the rs7566605 polymorphism with obesity.
Our data suggested an association with extreme degrees of obesity, and consequently heterogeneous effects from different study designs may mask an underlying association when unaccounted for. The importance of study design might be under-recognized in gene discovery and association replication so far.
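The pooling machinery behind the numbers above (inverse-variance fixed-effect ORs, Cochran's Q, and the I² heterogeneity measure) can be sketched as follows. The study values here are illustrative placeholders, not the 34 studies the abstract analyses.

```python
import math

# Hypothetical per-study odds ratios with 95% confidence intervals
# (illustrative values only, not the actual INSIG2 studies).
studies = [
    (1.20, 0.95, 1.52),  # (OR, CI lower, CI upper)
    (0.98, 0.80, 1.20),
    (1.10, 0.92, 1.32),
]

def pooled_or(studies):
    """Fixed-effect inverse-variance pooling of log odds ratios."""
    weights, log_ors = [], []
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the 95% CI
        weights.append(1 / se**2)
        log_ors.append(math.log(or_))
    pooled = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
    # Cochran's Q and the I^2 between-study heterogeneity measure
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, log_ors))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), i2

or_pooled, i2 = pooled_or(studies)
print(f"pooled OR = {or_pooled:.2f}, I^2 = {i2:.0f}%")
```

With the toy values the three studies are mutually consistent, so I² comes out at 0%; the 41% reported in the abstract reflects genuine between-study heterogeneity across the real designs.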


We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e., subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared with low-risk lotteries, compatible with the over- (under-)weighting of small (large) probabilities predicted by PT; and gender differences, i.e., males being consistently less risk averse than females, though both genders are similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high-stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries than males do. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments than in low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is less clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to creating large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even more complex future descriptions of human attitudes toward risk.
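The principal-component step described above can be sketched as follows: a subjects-by-panels choice matrix is centred and decomposed, and the leading components play the role of the two dimensions the task captures. The choice matrix here is a toy illustration, not the SGG experimental dataset.

```python
import numpy as np

# Toy data: rows are subjects, columns are lottery panels; entries are the
# riskiness rank (1 = safest option) each subject chose. Illustrative only.
choices = np.array([
    [1, 1, 2, 2, 3],
    [2, 2, 3, 3, 4],
    [1, 2, 2, 3, 3],
    [3, 3, 4, 4, 5],
    [2, 1, 2, 2, 3],
], dtype=float)

# Centre the data and take the SVD: the right singular vectors are the
# principal components, and projections onto them are the subject scores.
centred = choices - choices.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt.T            # subject scores on each component
explained = s**2 / np.sum(s**2)    # share of variance per component

# The first two components correspond to the two dimensions of risk
# attitudes: overall risk taking and sensitivity to the risk-return gradient.
print("variance explained by first two PCs:", explained[:2].round(2))
```

In the actual task, a subject's score on the first component summarises average risk taking, while the second captures how strongly their choices shift as the risk premium rises across panels.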


Using panel data for 111 countries over the period 1982–2002, we employ two indexes that cover a wide range of human rights to empirically analyze whether, and to what extent, terrorism affects human rights. According to our results, terrorism significantly, but not dramatically, diminishes governments' respect for basic human rights such as the absence of extrajudicial killings, political imprisonment, and torture. The result is robust to how we measure terrorist attacks, to the method of estimation, and to the choice of countries in our sample. However, we find no effect of terrorism on empowerment rights.
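The kind of country-year panel estimation used here can be sketched with a minimal within (fixed-effects) estimator. Everything below is hypothetical: a toy human-rights score y, a 0/1 terror-attack indicator x, and additive country effects; it is not the authors' model or data.

```python
import numpy as np

# Toy balanced panel: 5 "countries" observed over 4 "years".
x = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 1, 0, 1]], dtype=float)   # terror-attack indicator
country_effects = np.array([2.0, -1.0, 0.5, 1.5, -0.5])
beta_true = -0.5                            # effect of terror on the score
y = country_effects[:, None] + beta_true * x

# Within transformation: demeaning each country's series sweeps out the
# time-invariant country effects, leaving only within-country variation.
x_w = (x - x.mean(axis=1, keepdims=True)).ravel()
y_w = (y - y.mean(axis=1, keepdims=True)).ravel()
beta_hat = (x_w @ y_w) / (x_w @ x_w)
print(f"fixed-effects estimate: {beta_hat:.2f}")   # recovers beta_true
```

The robustness checks mentioned in the abstract would correspond to swapping in alternative terror measures for x, alternative estimators, and different country subsamples.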


The direct radiative forcing of 65 chlorofluorocarbons, hydrochlorofluorocarbons, hydrofluorocarbons, hydrofluoroethers, halons, iodoalkanes, chloroalkanes, bromoalkanes, perfluorocarbons and nonmethane hydrocarbons has been evaluated using a consistent set of infrared absorption cross sections. For the radiative transfer models, both line-by-line and random band model approaches were employed for each gas. The line-by-line model was first validated against measurements taken by the Airborne Research Interferometer Evaluation System (ARIES) of the U.K. Meteorological Office; the computed spectrally integrated radiances agreed to within 2% with the experimental measurements. Three model atmospheres, derived from a three-dimensional climatology, were used in the radiative forcing calculations to more accurately represent hemispheric differences in water vapor, ozone concentrations, and cloud cover. Instantaneous, clear-sky radiative forcing values calculated by the line-by-line and band models were in close agreement. The band model values were subsequently modified to ensure exact agreement with the line-by-line model values. Calibrated band model radiative forcing values, for atmospheric profiles with clouds and using stratospheric adjustment, are reported and compared with previous literature values. Fourteen of the 65 molecules have forcings that differ by more than 15% from those in the World Meteorological Organization [1999] compilation. Eleven of the molecules have not been reported previously. The 65-molecule data set reported here is the most comprehensive and consistent database yet available to evaluate the relative impact of halocarbons and hydrocarbons on climate change.


Two different TAMSAT (Tropical Applications of Meteorological Satellites) methods of rainfall estimation were developed for northern and southern Africa, based on Meteosat images. These two methods were used to make rainfall estimates for the southern rainy season from October 1995 to April 1996. Estimates produced by both TAMSAT methods and estimates produced by the CPC (Climate Prediction Center) method were then compared with kriged data from over 800 raingauges in southern Africa. This shows that the operational TAMSAT estimates are better over plateau regions, with 59% of estimates within one standard error (s.e.) of the kriged rainfall. Over mountainous regions the CPC approach is generally better, although all methods underestimate and place only 40% of estimates within 1 s.e. The two TAMSAT methods show little difference across a whole season, but examined in detail the northern method gives unsatisfactory calibrations. The CPC method does achieve significant overall improvements by incorporating real-time raingauge data, but only where sufficient raingauges are available.


What are the main causes of international terrorism? Despite the meticulous examination of various candidate explanations, existing estimates still diverge in sign, size, and significance. This article puts forward a novel explanation and supporting evidence. We argue that domestic political instability provides the learning environment needed to successfully execute international terror attacks. Using a yearly panel of 123 countries over 1973–2003, we find that the occurrence of civil wars increases fatalities and the number of international terrorist acts by 45%. These results hold for alternative indicators of political instability, estimators, subsamples, subperiods, and accounting for competing explanations.


The bitter taste elicited by dairy protein hydrolysates (DPH) is a well-known obstacle to their acceptance by consumers and therefore to their incorporation into foods. The traditional method of assessing taste in foods is sensory analysis, but this can be problematic given the overall unpleasantness of the samples. Thus, there is growing interest in the use of electronic tongues (e-tongues) as an alternative method to quantify bitterness in such samples. In the present study, the response of the e-tongue to the standard bitter agent caffeine and to a range of both casein- and whey-based hydrolysates was compared with that of a trained sensory panel. Partial least squares (PLS) regression was employed to compare the responses of the e-tongue and the sensory panel. There was strong correlation between the two methods in the analysis of caffeine (R² of 0.98) and of the DPH samples, with R² values ranging from 0.94 to 0.99. This study demonstrates the potential of the e-tongue for bitterness screening of DPHs, reducing the reliance on expensive and time-consuming sensory panels.


Climate data are used in a number of applications, including climate risk management and adaptation to climate change. However, the availability of climate data, particularly throughout rural Africa, is very limited. Available weather stations are unevenly distributed and mainly located along main roads in cities and towns. This imposes severe limitations on the availability of climate information and services for the rural community where, arguably, these services are needed most. Weather station data also suffer from gaps in the time series. Satellite proxies, particularly satellite rainfall estimates, have been used as alternatives because of their availability even over remote parts of the world. However, satellite rainfall estimates also suffer from a number of critical shortcomings, including heterogeneous time series, short periods of observation, and poor accuracy, particularly at higher temporal and spatial resolutions. An attempt is made here to alleviate these problems by combining station measurements with the complete spatial coverage of satellite rainfall estimates. Rain gauge observations are merged with a locally calibrated version of the TAMSAT satellite rainfall estimates to produce over 30 years (1983 to date) of rainfall estimates over Ethiopia at a spatial resolution of 10 km and a ten-daily time scale. This involves quality control of the rain gauge data, generating a locally calibrated version of the TAMSAT rainfall estimates, and combining these with rain gauge observations from the national station network. The infrared-only satellite rainfall estimates produced using the relatively simple TAMSAT algorithm performed as well as, or even better than, other satellite rainfall products that use passive microwave inputs and more sophisticated algorithms.
There is no substantial difference between the gridded-gauge and combined gauge-satellite products over the test area in Ethiopia, which has a dense station network; however, the combined product exhibits better quality over parts of the country where stations are sparsely distributed.
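Gauge-satellite merging of the kind described above can be sketched very simply: calibrate the satellite field against the gauges, then spread the remaining gauge residuals spatially. This is a minimal illustration on a 1-D transect with a mean-bias calibration and inverse-distance weighting; the operational TAMSAT-Ethiopia scheme is considerably more involved.

```python
import numpy as np

sat = np.array([10.0, 12.0, 8.0, 15.0, 20.0, 18.0, 5.0, 7.0])  # satellite (mm)
gauge_idx = np.array([1, 4, 6])             # cells that contain rain gauges
gauge_val = np.array([15.0, 22.0, 4.0])     # gauge totals (mm), hypothetical

# Step 1: mean-field bias calibration of the satellite estimates
ratio = gauge_val.sum() / sat[gauge_idx].sum()
calibrated = sat * ratio

# Step 2: interpolate the gauge residuals with inverse-distance weights,
# so the merged field is anchored to gauges where they exist but falls
# back to the calibrated satellite field far from any gauge.
residuals = gauge_val - calibrated[gauge_idx]
cells = np.arange(len(sat))
dist = np.abs(cells[:, None] - gauge_idx[None, :]).astype(float)
weights = 1.0 / np.maximum(dist, 0.5) ** 2  # floor avoids division by zero
weights /= weights.sum(axis=1, keepdims=True)
merged = calibrated + weights @ residuals

print(merged.round(1))
```

The behaviour matches the abstract's finding: where gauges are dense the merged field is dominated by the gauge information, and where they are sparse it reverts to the (calibrated) satellite estimate.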


Over the last decade, due to the Gravity Recovery And Climate Experiment (GRACE) mission and, more recently, the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, our ability to measure the ocean's mean dynamic topography (MDT) from space has improved dramatically. Here we use GOCE to measure surface current speeds in the North Atlantic and compare our results with a range of independent estimates that use drifter data to improve small scales. We find that, with filtering, GOCE can recover 70% of the Gulf Stream strength relative to the best drifter-based estimates. In the subpolar gyre the boundary currents obtained from GOCE are close to the drifter-based estimates. Crucial to this result is careful filtering, which is required to remove small-scale errors, or noise, in the computed surface. We show that our heuristic noise metric, used to determine the degree of filtering, compares well with the quadratic sum of the mean sea surface and formal geoid errors obtained from the error variance–covariance matrix associated with the GOCE gravity model. At a resolution of 100 km, the North Atlantic mean GOCE MDT error before filtering is 5 cm, with almost all of this coming from the GOCE gravity model.


Wild bird feeding is popular in domestic gardens across the world. Nevertheless, there is surprisingly little empirical information on certain aspects of the activity and no year-round quantitative records of the amounts and nature of the different foods provided in individual gardens. We sought to characterise garden bird feeding in a large UK urban area in two ways. First, we conducted face-to-face questionnaires with a representative cross-section of residents. Just over half fed birds, the majority doing so year round and at least weekly. Second, a two-year study recorded all foodstuffs put out by households on every provisioning occasion. A median of 628 kcal/garden/day was given. Provisioning level was not significantly influenced by weather or season. Comparisons between the data sets revealed significantly less frequent feeding amongst these ‘keen’ feeders than the face-to-face questionnaire respondents, suggesting that one-off questionnaires may overestimate provisioning frequency. Assuming 100% uptake, the median provisioning level equates to sufficient supplementary resources across the UK to support 196 million individuals of a hypothetical average garden-feeding bird species (based on 10 common UK garden-feeding birds’ energy requirements). Taking the lowest provisioning level recorded (101 kcal/day) as a conservative measure, 31 million of these average individuals could theoretically be supported.


This paper provides an overview of interpolation of Banach and Hilbert spaces, with a focus on establishing when equivalence of norms is in fact equality of norms in the key results of the theory. (In brief, our conclusion for the Hilbert space case is that, with the right normalisations, all the key results hold with equality of norms.) In the final section we apply the Hilbert space results to the Sobolev spaces H^s(Ω) and H̃^s(Ω), for s ∈ R and an open Ω ⊂ R^n. We exhibit examples in one and two dimensions of sets Ω for which these scales of Sobolev spaces are not interpolation scales. In the cases when they are interpolation scales (in particular, if Ω is Lipschitz) we exhibit examples showing that, in general, the interpolation norm does not coincide with the intrinsic Sobolev norm and that, in fact, the ratio of these two norms can be arbitrarily large.
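For reference, the interpolation norms at issue are those of the standard real (K-method) interpolation; the normalisation constants whose choice the paper analyses are omitted here. For a compatible couple (X_0, X_1):

```latex
% K-functional of u with respect to the couple (X_0, X_1)
K(t, u) = \inf\bigl\{ \|u_0\|_{X_0} + t\,\|u_1\|_{X_1} :
          u = u_0 + u_1,\ u_0 \in X_0,\ u_1 \in X_1 \bigr\}

% Norm on the interpolation space (X_0, X_1)_{\theta,2},\ 0 < \theta < 1
\|u\|_{(X_0, X_1)_{\theta,2}}
  = \left( \int_0^\infty \bigl( t^{-\theta} K(t, u) \bigr)^2
    \,\frac{dt}{t} \right)^{1/2}
```

The paper's question is then whether, for the Sobolev scales on a given Ω, this interpolation norm actually equals (rather than is merely equivalent to) the intrinsic Sobolev norm.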


The potential risk of agricultural pesticides to mammals typically depends on internal concentrations within individuals, and these are determined by the amount ingested and by absorption, distribution, metabolism, and excretion (ADME). Pesticide residues ingested depend, amongst other things, on individual spatial choices which determine how much and when feeding sites and areas of pesticide application overlap, and can be calculated using individual-based models (IBMs). Internal concentrations can be calculated using toxicokinetic (TK) models, which are quantitative representations of ADME processes. Here we provide a population model for the wood mouse (Apodemus sylvaticus) in which TK submodels were incorporated into an IBM representation of individuals making choices about where to feed. This allows us to estimate the contribution of individual spatial choice and TK processes to risk. We compared the risk predicted by four IBMs: (i) “AllExposed-NonTK”: assuming no spatial choice so all mice have 100% exposure, no TK, (ii) “AllExposed-TK”: identical to (i) except that the TK processes are included where individuals vary because they have different temporal patterns of ingestion in the IBM, (iii) “Spatial-NonTK”: individual spatial choice, no TK, and (iv) “Spatial-TK”: individual spatial choice and with TK. The TK parameters for hypothetical pesticides used in this study were selected such that a conventional risk assessment would fail. Exposures were standardised using risk quotients (RQ; exposure divided by LD50 or LC50). We found that for the exposed sub-population including either spatial choice or TK reduced the RQ by 37–85%, and for the total population the reduction was 37–94%. However spatial choice and TK together had little further effect in reducing RQ. 
The reasons for this are that when the proportion of time spent in the treated crop (PT) approaches 1, TK processes dominate and spatial choice has very little effect; conversely, if PT is small, spatial choice dominates and TK makes little contribution to exposure reduction. The latter situation means that a short time spent in the pesticide-treated field mimics exposure from a small gavage dose, whereas TK only makes a substantial difference when the dose is consumed over a longer period. We conclude that a combined TK-IBM is most likely to add value to the risk assessment process when the temporal pattern of feeding, the time spent in the exposed area, and the TK parameters are at intermediate levels; for instance, wood mice in foliar spray scenarios spending more time in crop fields because of better plant cover.
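The interaction between feeding pattern and TK described above can be sketched with a one-compartment toxicokinetic model, where internal concentration C follows dC/dt = ka·intake(t) − ke·C. All parameter values, the feeding pattern, and the internal threshold below are hypothetical, not those of the wood-mouse model in the study.

```python
import numpy as np

ka, ke = 0.8, 0.3          # uptake and elimination rate constants (1/h)
dt = 0.1                   # Euler time step (h)
hours = np.arange(0, 48, dt)

# Intake: the mouse feeds in the treated crop only during a nightly
# bout from 20:00 to 02:00 (hypothetical pattern).
intake = np.where((hours % 24 > 20) | (hours % 24 < 2), 1.0, 0.0)  # mg/h

# Euler integration of dC/dt = ka * intake - ke * C
c = 0.0
c_max = 0.0
for dose in intake:
    c += (ka * dose - ke * c) * dt
    c_max = max(c_max, c)

ld50_equiv = 10.0          # hypothetical internal threshold
rq = c_max / ld50_equiv    # risk-quotient analogue on peak concentration
print(f"peak internal concentration {c_max:.2f}, RQ {rq:.2f}")
```

Because elimination runs continuously while intake is intermittent, the peak internal concentration stays well below what the same total dose given as a single bolus would produce, which is exactly why TK matters most when ingestion is spread over time.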


Incomplete understanding of three aspects of the climate system (equilibrium climate sensitivity, the rate of ocean heat uptake, and historical aerosol forcing) and of the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century [1,2]. Explorations of these uncertainties have so far relied on scaling approaches [3,4], large ensembles of simplified climate models [1,2], or small ensembles of complex coupled atmosphere–ocean general circulation models [5,6], which under-represent uncertainties in key climate system properties derived from independent sources [7-9]. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report [10], but extends towards larger warming than observed in the ensembles of opportunity [5] typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range 'no mitigation' scenario for greenhouse-gas emissions.