930 results for Random effect model
Abstract:
In order to validate the reported precision of space‐based atmospheric composition measurements, validation studies often focus on measurements in the tropical stratosphere, where natural variability is weak. The scatter in tropical measurements can then be used as an upper limit on single‐profile measurement precision. Here we introduce a method of quantifying the scatter of tropical measurements which aims to minimize the effects of short‐term atmospheric variability while maintaining large enough sample sizes that the results can be taken as representative of the full data set. We apply this technique to measurements of O3, HNO3, CO, H2O, NO, NO2, N2O, CH4, CCl2F2, and CCl3F produced by the Atmospheric Chemistry Experiment–Fourier Transform Spectrometer (ACE‐FTS). Tropical scatter in the ACE‐FTS retrievals is found to be consistent with the reported random errors (RREs) for H2O and CO at altitudes above 20 km, validating the RREs for these measurements. Tropical scatter in measurements of NO, NO2, CCl2F2, and CCl3F is roughly consistent with the RREs as long as the effect of outliers in the data set is reduced through the use of robust statistics. The scatter in measurements of O3, HNO3, CH4, and N2O in the stratosphere, while larger than the RREs, is shown to be consistent with the variability simulated in the Canadian Middle Atmosphere Model. This result implies that, for these species, stratospheric measurement scatter is dominated by natural variability, not random error, which provides added confidence in the scientific value of single‐profile measurements.
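The robust-statistics step mentioned above can be illustrated with a minimal sketch (not the authors' code; the scale factor assumes Gaussian-distributed scatter): the median absolute deviation (MAD) gives a scatter estimate that is insensitive to occasional retrieval outliers, unlike the ordinary standard deviation.

```python
import numpy as np

def robust_scatter(values):
    """Outlier-resistant scatter estimate: the median absolute deviation
    (MAD), scaled by 1.4826 so it matches the standard deviation for
    normally distributed data."""
    x = np.asarray(values, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

# Illustration: a few outliers inflate the standard deviation but
# barely move the MAD-based estimate.
rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 500)   # scatter consistent with a 1.0 "RRE"
sample[:5] = 25.0                    # simulated retrieval outliers
print(f"std = {sample.std():.2f}, robust = {robust_scatter(sample):.2f}")
```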
Abstract:
A global aerosol transport model (Oslo CTM2) with the main aerosol components included is compared to five satellite retrievals of aerosol optical depth (AOD) and one data set of the satellite-derived radiative effect of aerosols. The model is driven with meteorological data for November 1996 to June 1997, the period investigated in this study. The modelled AOD is within the range of the AOD from the various satellite retrievals over oceanic regions. Both the direct radiative effect of the aerosols and the atmospheric absorption by aerosols are found to be of the order of 20 Wm−2 in certain regions, in both the satellite-derived and the modelled estimates, as a mean over the period studied. Satellite and model data exhibit similar patterns of aerosol optical depth, radiative effect of aerosols, and atmospheric absorption of the aerosols. Recently published results show that global aerosol models have a tendency to underestimate the magnitude of the clear-sky direct radiative effect of aerosols over ocean compared to satellite-derived estimates; however, this is only the case to a small extent with the Oslo CTM2, which yields a global mean direct radiative effect of aerosols over ocean of −5.5 Wm−2 and an atmospheric aerosol absorption of 1.5 Wm−2.
Abstract:
We demonstrate that summer precipitation biases in the South Asian monsoon domain are sensitive to increasing the convective parametrisation’s entrainment and detrainment rates in the Met Office Unified Model. We explore this sensitivity to better understand the biases and to inform efforts to improve the convective parametrisation. We perform novel targeted experiments in which we increase the entrainment and detrainment rates in regions of especially large precipitation bias. We use these experiments to determine whether the sensitivity at a given location is a consequence of the local change to convection or is a remote response to the change elsewhere. We find that a local change leads to different mean-state responses in comparable regions. When the entrainment and detrainment rates are increased globally, feedbacks between regions usually strengthen the local responses. We choose two regions of tropical ascent that show different mean-state responses, the western equatorial Indian Ocean and the western north Pacific, and analyse them as case studies to determine the mechanisms leading to the different responses. Our results indicate that several aspects of a region’s mean state, including moisture content, sea surface temperature and circulation, play a role in local feedbacks that determine the response to increased entrainment and detrainment.
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgments and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, would produce a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of the mixed-effects model analysis compared to the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis; we recommend the mixed-effects model analysis instead.
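A minimal sketch of the recommended analysis (illustrative only, with hypothetical column names; a logistic link would be more faithful for binary recall, but a linear mixed model keeps the example short) uses crossed random intercepts for participants and items, expressed in statsmodels as variance components within a single dummy group:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant-item trial, with
# a metacognitive judgment (e.g., a JOL) and a recall outcome (0/1).
rng = np.random.default_rng(1)
n_subj, n_item = 40, 60
df = pd.DataFrame(
    {"subject": np.repeat(np.arange(n_subj), n_item),
     "item": np.tile(np.arange(n_item), n_subj)}
)
df["judgment"] = rng.normal(size=len(df))
df["recall"] = rng.integers(0, 2, size=len(df))
df["one"] = 1  # single dummy group so both factors enter as crossed effects

# Crossed random intercepts for subjects and items as variance components
# (a documented statsmodels pattern); no random effect for the dummy group.
model = smf.mixedlm(
    "recall ~ judgment", df, groups="one", re_formula="0",
    vc_formula={"subject": "0 + C(subject)", "item": "0 + C(item)"},
)
result = model.fit()
print(result.summary())  # the 'judgment' slope reflects resolution
```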
Abstract:
The transfer of hillslope water to and through the riparian zone is an important research area in hydrological investigations. Numerical modelling schemes offer a way to visualise and quantify first-order controls on catchment runoff response and mixing. We use a two-dimensional finite element model to assess the link between model setup decisions (e.g. zero-flux boundary definitions, soil algorithm choice) and the consequent hydrological process behaviour. A detailed understanding of the consequences of model configuration is required in order to produce reliable estimates of state variables. We demonstrate that model configuration decisions can effectively determine the presence or absence of particular hillslope flow processes and the magnitude and direction of flux at the hillslope–riparian interface. If these consequences are not fully explored for any given scheme and application, the resulting process inference may well be misleading.
Abstract:
Using the GlobAEROSOL-AATSR dataset, estimates of the instantaneous, clear-sky, direct aerosol radiative effect and radiative forcing have been produced for the year 2006. Aerosol Robotic Network sun-photometer measurements have been used to characterise the random and systematic error in the GlobAEROSOL product for 22 regions covering the globe. Representative aerosol properties for each region were derived from a wide range of literature sources and, along with the de-biased GlobAEROSOL AODs, were used to drive an offline version of the Met Office unified model radiation scheme. In addition to the best-estimate run of the radiation scheme with the mean AOD, a range of additional calculations was performed to propagate uncertainties in the AOD, optical properties and surface albedo, as well as errors due to the temporal and spatial averaging of the AOD fields. This analysis produced monthly, regional estimates of the clear-sky aerosol radiative effect and its uncertainty, which were combined to produce annual, global mean values of (−6.7±3.9) Wm−2 at the top of atmosphere (TOA) and (−12±6) Wm−2 at the surface. These results were then used to give estimates of regional, clear-sky aerosol direct radiative forcing, using modelled pre-industrial AOD fields for the year 1750 calculated for the AEROCOM PRE experiment. However, as it was not possible to quantify the uncertainty in the pre-industrial aerosol loading, these figures can only be taken as indicative, and their uncertainties as lower bounds on the likely errors. Although the uncertainty on the aerosol radiative effect presented here is considerably larger than in most previous estimates, the explicit inclusion of the major sources of error in the calculations suggests that it is closer to the true constraint achievable with such methodologies, and points to the need for more and improved estimates of both global aerosol loading and aerosol optical properties.
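The uncertainty propagation can be sketched schematically with a Monte Carlo approach (hypothetical numbers and a toy linear stand-in for the radiation scheme, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo samples

# Hypothetical regional values with 1-sigma uncertainties (illustrative)
aod    = rng.normal(0.15, 0.03, n)   # de-biased AOD and retrieval error
ssa    = rng.normal(0.95, 0.02, n)   # single-scattering albedo
albedo = rng.normal(0.06, 0.01, n)   # ocean surface albedo

def toa_effect(aod, ssa, albedo):
    """Toy linearised clear-sky TOA direct radiative effect (W m-2);
    a stand-in for the offline Met Office radiation scheme."""
    return -25.0 * aod * ssa * (1.0 - albedo)  # hypothetical sensitivity

dre = toa_effect(aod, ssa, albedo)
print(f"DRE = {dre.mean():.1f} +/- {dre.std():.1f} W m-2")
```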
Abstract:
The nuclear time-dependent Hartree-Fock model formulated in three-dimensional space, based on the full standard Skyrme energy density functional complemented with the tensor force, is presented. Full self-consistency is achieved by the model. The application to the isovector giant dipole resonance is discussed in the linear limit, ranging from spherical nuclei (16O and 120Sn) to systems displaying axial or triaxial deformation (24Mg, 28Si, 178Os, 190W and 238U). Particular attention is paid to the spin-dependent terms from the central sector of the functional, recently included together with the tensor terms. They turn out to be capable of producing a qualitative change in the strength distribution in this channel. The effect on the deformation properties is also discussed. The quantitative effects on the linear response are small and, overall, the giant dipole energy remains unaffected. Calculations are compared to predictions from the (quasi)particle random-phase approximation and to experimental data where available, with good agreement found.
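For reference, the linear-limit strength function referred to here is commonly obtained in TDHF by applying a small isovector dipole boost to the ground state and Fourier-transforming the response; in standard (paper-independent) notation:

$$\hat F = \frac{N}{A}\sum_{i\in \text{protons}} z_i \;-\; \frac{Z}{A}\sum_{i\in \text{neutrons}} z_i, \qquad |\Psi(0)\rangle = e^{\,i\varepsilon \hat F}\,|\Psi_0\rangle,$$

$$S(E) = -\frac{1}{\pi\varepsilon}\,\operatorname{Im}\int_0^{\infty} \mathrm{d}t\; e^{\,iEt/\hbar}\,\big[\langle \hat F\rangle(t)-\langle \hat F\rangle(0)\big].$$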
Abstract:
There is large diversity in simulated aerosol forcing among models that participated in the fifth phase of the Coupled Model Intercomparison Project (CMIP5), particularly related to aerosol interactions with clouds. Here we use the reported model data and fitted aerosol-cloud relations to separate the main sources of inter-model diversity in the magnitude of the cloud albedo effect. There is large diversity in the global load and spatial distribution of sulfate aerosol, as well as in the global-mean cloud-top effective radius. The use of different parameterizations of aerosol-cloud interactions makes the largest contribution to the diversity in modeled radiative forcing (−39%, +48% about the mean estimate). Uncertainty in the pre-industrial sulfate load also makes a substantial contribution (−15%, +61% about the mean estimate), with smaller contributions from inter-model differences in the historical change in sulfate load and in mean cloud fraction.
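The cloud albedo (Twomey) effect whose diversity is being diagnosed here follows, at fixed liquid water path, the standard susceptibility relation

$$\Delta A \;\approx\; \frac{A\,(1-A)}{3}\,\Delta\ln N_d, \qquad r_e \propto N_d^{-1/3},$$

where $A$ is the cloud albedo, $N_d$ the cloud droplet number concentration and $r_e$ the effective radius; inter-model diversity enters mainly through how $N_d$ is parameterized as a function of sulfate load.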
Abstract:
The disadvantage of the majority of data assimilation schemes is the assumption that the conditional probability density function of the state of the system given the observations [the posterior probability density function (PDF)] is distributed either locally or globally as a Gaussian. The advantage, however, is that through various mechanisms they ensure initial conditions that are predominantly in linear balance, and therefore spurious gravity wave generation is suppressed. The equivalent-weights particle filter is a data assimilation scheme that allows for a representation of a potentially multimodal posterior PDF. It does this via proposal densities that add extra terms to the model equations, which means the advantage of the traditional data assimilation schemes in generating predominantly balanced initial conditions is no longer guaranteed. This paper looks in detail at the impact the equivalent-weights particle filter has on dynamical balance and gravity wave generation in a primitive equation model. The primary conclusions are that (i) provided the model error covariance matrix imposes geostrophic balance, each additional term required by the equivalent-weights particle filter is also geostrophically balanced; (ii) the relaxation term required to ensure the particles are in the locality of the observations has little effect on gravity waves and actually induces a reduction in gravity wave energy if sufficiently large; and (iii) the equivalent-weights term, which leads to the particles having equivalent significance in the posterior PDF, produces a change in gravity wave energy comparable to the stochastic model error. Thus, the scheme does not produce significant spurious gravity wave energy and so has potential for application in real high-dimensional geophysical applications.
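The balance constraint invoked in conclusion (i) is the standard geostrophic relation between the horizontal wind and the geopotential gradient,

$$f\,\hat{\mathbf k}\times\mathbf u = -\nabla\Phi,$$

so any additive term whose covariance respects this relation leaves the particles geostrophically balanced.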
Abstract:
Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the sparsely observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble made with a nudged version of the IPSLCM5A model is compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which used a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely regions of subduction in the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and therefore cannot be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14 °C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect-model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
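The isotherm-based diagnostic can be sketched as follows (a minimal, hypothetical implementation for a single profile; the paper's exact processing is not reproduced):

```python
import numpy as np

def mean_temp_above_isotherm(temp, depth, t_iso=14.0):
    """Depth-weighted mean temperature of the layer between the surface
    and the first crossing of the `t_iso` isotherm (linear interpolation).
    Returns NaN if the profile never gets colder than t_iso."""
    temp, depth = np.asarray(temp, float), np.asarray(depth, float)
    below = np.where(temp < t_iso)[0]
    if below.size == 0 or below[0] == 0:
        return np.nan
    k = below[0]  # first level colder than the isotherm
    z_iso = depth[k-1] + (depth[k] - depth[k-1]) * \
            (temp[k-1] - t_iso) / (temp[k-1] - temp[k])
    z = np.append(depth[:k], z_iso)
    t = np.append(temp[:k], t_iso)
    # trapezoidal integral of T over depth, divided by layer thickness
    integral = np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(z))
    return integral / (z[-1] - z[0])

# Hypothetical profile: ~19.6 degC mean above the 14 degC isotherm (140 m)
print(mean_temp_above_isotherm([26., 22., 16., 11., 6.],
                               [0., 50., 100., 200., 400.]))
```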
Abstract:
Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed-model association (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected by the RMLM method with a less stringent selection criterion. Due to its multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.
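The two-stage multi-locus idea can be sketched schematically (simple regression stands in for the kinship-corrected random-SNP-effect mixed model, and the lenient first-stage threshold only illustrates the spirit of the modified, less stringent correction):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, m = 200, 1000                                    # individuals, markers
X = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes coded 0/1/2
beta = np.zeros(m)
beta[[10, 500]] = 0.8                               # two planted (hypothetical) QTNs
y = X @ beta + rng.normal(0.0, 1.0, n)

# Stage 1: single-marker scan (a stand-in for the RMLM/EMMA mixed model).
pvals = np.array([stats.linregress(X[:, j], y).pvalue for j in range(m)])

# Stage 2: carry markers past a deliberately lenient threshold (more
# permissive than the standard Bonferroni 0.05/m), then fit the survivors
# jointly in one multi-locus model (the MRMLM idea).
candidates = np.where(pvals < 1e-4)[0]
Xc = np.column_stack([np.ones(n), X[:, candidates]])
joint_beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
print(candidates, np.round(joint_beta[1:], 2))
```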
Abstract:
The influence of the thalamus on the diversity of cortical activations is investigated in terms of the Ising model, with respect to progressive levels of cortico-thalamic connectivity. The results show that greater diversity is achieved at lower modulation levels, and that it exceeds the diversity obtained with counterpart network models.
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in item response models (IRM) and are usually called latent traits. A usual assumption for parameter estimation of an IRM, considering one group of examinees, is that the latent traits are random variables following a standard normal distribution. However, many works suggest that this assumption does not hold in many cases. Furthermore, when this assumption fails, the parameter estimates tend to be biased and misleading inferences can be obtained. Therefore, it is important to model the distribution of the latent traits properly. In this paper we present an alternative latent trait model based on the so-called skew-normal distribution; see Genton (2004). We used the centred parameterization, which was proposed by Azzalini (1985). This approach ensures model identifiability, as pointed out by Azevedo et al. (2009b). Also, a Metropolis–Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation using an augmented-data approach. A simulation study was performed to assess parameter recovery under the proposed model and estimation method, and the effect of the asymmetry level of the latent trait distribution on parameter estimation. We also compared our approach with other estimation methods, which assume a symmetric normal distribution for the latent traits. The results indicate that our proposed algorithm properly recovers all parameters. Specifically, the greater the asymmetry level, the better the performance of our approach compared with the other approaches, mainly in the presence of small sample sizes (numbers of examinees). Furthermore, we analyzed a real data set which shows indications of asymmetry in the latent trait distribution. The results obtained using our approach confirmed the presence of strong negative asymmetry in the latent trait distribution.
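For reference, the skew-normal density and the centred parameterization used here (standard results from Azzalini's work) are

$$f(x\mid \xi,\omega,\alpha)=\frac{2}{\omega}\,\phi\!\left(\frac{x-\xi}{\omega}\right)\Phi\!\left(\alpha\,\frac{x-\xi}{\omega}\right),\qquad \delta=\frac{\alpha}{\sqrt{1+\alpha^2}},$$

with centred parameters (mean, standard deviation, skewness)

$$\mu=\xi+\omega\delta\sqrt{2/\pi},\qquad \sigma^2=\omega^2\left(1-\frac{2\delta^2}{\pi}\right),\qquad \gamma_1=\frac{4-\pi}{2}\,\frac{\left(\delta\sqrt{2/\pi}\right)^3}{\left(1-2\delta^2/\pi\right)^{3/2}}.$$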