Abstract:
Projections of stratospheric ozone from a suite of chemistry-climate models (CCMs) have been analyzed. In addition to a reference simulation where anthropogenic halogenated ozone depleting substances (ODSs) and greenhouse gases (GHGs) vary with time, sensitivity simulations with either ODS or GHG concentrations fixed at 1960 levels were performed to disaggregate the drivers of projected ozone changes. These simulations were also used to assess the two distinct milestones of ozone returning to historical values (ozone return dates) and ozone no longer being influenced by ODSs (full ozone recovery). The date of ozone returning to historical values does not indicate complete recovery from ODSs in most cases, because GHG-induced changes accelerate or decelerate ozone changes in many regions. In the upper stratosphere, where CO2-induced stratospheric cooling increases ozone, full ozone recovery is not likely to occur by 2100, even though ozone returns to its 1980 and even 1960 levels well before then (~2025 and ~2040, respectively). In contrast, in the tropical lower stratosphere ozone decreases continuously from 1960 to 2100 due to projected increases in tropical upwelling, while by around 2040 it is already very likely that full recovery from the effects of ODSs has occurred, although ODS concentrations are still elevated by this date. In the midlatitude lower stratosphere the evolution differs from that in the tropics: rather than decreasing steadily, ozone is simulated to decrease from 1960 to 2000 and then to increase steadily through the 21st century. Ozone in the midlatitude lower stratosphere returns to 1980 levels by ~2045 in the Northern Hemisphere (NH) and by ~2055 in the Southern Hemisphere (SH), and full ozone recovery is likely reached by 2100 in both hemispheres. Overall, in all regions except the tropical lower stratosphere, full ozone recovery from ODSs occurs significantly later than the return of total column ozone to its 1980 level. The latest return of total column ozone is projected to occur over Antarctica (~2045–2060), where it is not likely that full ozone recovery will be reached by the end of the 21st century. Arctic total column ozone is projected to return to 1980 levels well before polar stratospheric halogen loading does so (~2025–2030 for total column ozone, cf. 2050–2070 for Cly + 60×Bry), and it is likely that full recovery of total column ozone from the effects of ODSs will have occurred by ~2035. In contrast to the Antarctic, by 2100 Arctic total column ozone is projected to be above 1960 levels, but not in the fixed-GHG simulation, indicating that climate change plays a significant role.
A model-based assessment of the effects of projected climate change on the water resources of Jordan
Abstract:
This paper is concerned with the quantification of the likely effect of anthropogenic climate change on the water resources of Jordan by the end of the twenty-first century. Specifically, a suite of hydrological models is used in conjunction with modelled outcomes from a regional climate model, HadRM3, and a weather generator to determine how future flows in the upper River Jordan and in the Wadi Faynan may change. The results indicate that groundwater will play an important role in the water security of the country as irrigation demands increase. Given future projections of reduced winter rainfall and increased near-surface air temperatures, the already low groundwater recharge will decrease further. Interestingly, the modelled discharge at the Wadi Faynan indicates that extreme flood flows will increase in magnitude, despite a decrease in the mean annual rainfall. Simulations project no increase in flood magnitude in the upper River Jordan. Discussion focuses on the utility of the modelling framework, the problems of making quantitative forecasts and the implications of reduced water availability in Jordan.
Abstract:
The theta-logistic is a widely used generalisation of the logistic model of regulated biological processes, used in particular to model population regulation. The parameter theta gives the shape of the relationship between per-capita population growth rate and population size. Estimation of theta from population counts is, however, subject to bias, particularly when there are measurement errors. Here we identify factors disposing towards accurate estimation of theta by simulation of populations regulated according to the theta-logistic model. Factors investigated were measurement error, environmental perturbation and length of time series. Large measurement errors bias estimates of theta towards zero. Where estimated theta is close to zero, the estimated annual return rate may help resolve whether this is due to bias. Environmental perturbations help yield unbiased estimates of theta. Where environmental perturbations are large, estimates of theta are likely to be reliable even when measurement errors are also large. By contrast, where the environment is relatively constant, unbiased estimates of theta can only be obtained if populations are counted precisely. Our results have practical conclusions for the design of long-term population surveys. Estimation of the precision of population counts would be valuable, and could be achieved in practice by repeating counts in at least some years. Increasing the length of time series beyond 10 or 20 years yields only small benefits. If populations are measured with appropriate accuracy, given the level of environmental perturbation, unbiased estimates can be obtained from relatively short censuses. These conclusions are optimistic for estimation of theta. (C) 2008 Elsevier B.V. All rights reserved.
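For readers who want to see the bias mechanism concretely, below is a minimal Python sketch of the kind of simulation study described: a theta-logistic (theta-Ricker) population with environmental noise, observed through lognormal measurement error. All parameter values and names are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(theta, r=0.5, K=1000.0, n_years=30,
                    sigma_env=0.1, sigma_obs=0.2, n0=200.0):
    """Simulate a theta-logistic (theta-Ricker) population and noisy counts.

    True dynamics:  N[t+1] = N[t] * exp(r * (1 - (N[t]/K)**theta) + env noise)
    Observations:   C[t]   = N[t] * exp(obs noise)  (lognormal measurement error)
    """
    n = np.empty(n_years)
    n[0] = n0
    for t in range(n_years - 1):
        env = rng.normal(0.0, sigma_env)  # environmental perturbation
        n[t + 1] = n[t] * np.exp(r * (1.0 - (n[t] / K) ** theta) + env)
    counts = n * np.exp(rng.normal(0.0, sigma_obs, n_years))  # measurement error
    return n, counts

true_n, observed = simulate_counts(theta=1.0)
# Per-capita growth rates computed from noisy counts; regressing these on
# (C/K)**theta is where the bias towards theta = 0 enters.
growth = np.diff(np.log(observed))
print(growth[:5])
```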
Abstract:
To explore the projection efficiency of a design, Tsai et al. [2000. Projective three-level main effects designs robust to model uncertainty. Biometrika 87, 467–475] introduced the Q criterion to compare three-level main-effects designs for quantitative factors, allowing the consideration of interactions in addition to main effects. In this paper, we extend their method and focus on the case in which experimenters have some prior knowledge, in advance of running the experiment, about the probabilities of effects being non-negligible. A criterion which incorporates experimenters' prior beliefs about the importance of each effect is introduced to compare orthogonal, or nearly orthogonal, main effects designs with robustness to interactions as a secondary consideration. We show that this criterion, by exploiting prior information about model uncertainty, can lead to more appropriate designs reflecting experimenters' prior beliefs. (c) 2006 Elsevier B.V. All rights reserved.
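As a rough illustration of the underlying idea (not the exact Q-type criterion of Tsai et al.), the sketch below averages a scaled D-criterion over all candidate models that contain the main effects, weighting each model by the prior probability that its interaction terms are active. The function, its arguments, and the p_int prior are assumptions for illustration only.

```python
import itertools
import numpy as np

def weighted_d_efficiency(X_full, main_cols, interaction_cols, p_int=0.3):
    """Prior-weighted design assessment (illustration, not Tsai et al.'s Q).

    X_full:           model matrix: intercept + main effects + candidate interactions.
    main_cols:        column indices of the main effects (always in the model).
    interaction_cols: column indices of candidate interactions.
    p_int:            prior probability that any single interaction is non-negligible.
    """
    n = X_full.shape[0]
    base = [0] + list(main_cols)        # intercept + main effects always included
    score = 0.0
    for k in range(len(interaction_cols) + 1):
        for subset in itertools.combinations(interaction_cols, k):
            # Prior weight of this candidate model (independent effect priors).
            prior = p_int ** k * (1 - p_int) ** (len(interaction_cols) - k)
            X = X_full[:, base + list(subset)]
            info = X.T @ X / n          # scaled information matrix
            det = np.linalg.det(info)
            if det > 0:                 # skip models the design cannot estimate
                score += prior * det ** (1.0 / X.shape[1])  # scaled D-criterion
    return score
```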
Abstract:
In this paper, we list some new orthogonal main effects plans for three-level designs for 4, 5 and 6 factors in 18 runs and compare them with designs obtained from the existing L-18 orthogonal array. We show that these new designs have better projection properties and can provide better parameter estimates for a range of possible models. Additionally, we study designs in other, smaller run-sizes for cases where there are insufficient resources to perform an 18-run experiment. Plans for three-level designs for 4, 5 and 6 factors in 13 to 17 runs are given. We show that the best designs here are efficient and deserve strong consideration in many practical situations.
Abstract:
The feature model of immediate memory (Nairne, 1990) is applied to an experiment testing individual differences in phonological confusions amongst a group (N=100) of participants performing a verbal memory test. By simulating the performance of an equivalent number of “pseudo-participants”, the model fits both the mean performance and the variability within the group. Experimental data show that high-performing individuals are significantly more likely to demonstrate phonological confusions than low-performing individuals, and this is also true of the model, despite the model’s lack of either an explicit phonological store or a performance-linked strategy shift away from phonological storage. It is concluded that a dedicated phonological store is not necessary to explain the basic phonological confusion effect, and that the reduction in such an effect can also be explained without requiring a change in encoding or rehearsal strategy or the deployment of a different storage buffer.
Abstract:
This paper addresses the statistical mechanics of ideal polymer chains next to a hard wall. The principal quantity of interest, from which all monomer densities can be calculated, is the partition function, $G_N(z)$, for a chain of $N$ discrete monomers with one end fixed a distance $z$ from the wall. It is well accepted that in the limit of infinite $N$, $G_N(z)$ satisfies the diffusion equation with the Dirichlet boundary condition, $G_N(0) = 0$, unless the wall possesses a sufficient attraction, in which case the Robin boundary condition, $G_N(0) = -x\,G_N'(0)$, applies with a positive coefficient, $x$. Here we investigate the leading $N^{-1/2}$ correction, $\Delta G_N(z)$. Prior to the adsorption threshold, $\Delta G_N(z)$ is found to involve two distinct parts: a Gaussian correction (for $z \lesssim a N^{1/2}$) with a model-dependent amplitude, $A$, and a proximal-layer correction (for $z \lesssim a$) described by a model-dependent function, $B(z)$.
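For orientation, the continuum relations the abstract refers to can be written as follows; the $a^2/6$ diffusion coefficient assumes a Gaussian chain with statistical segment length $a$, which is one common convention.

```latex
\[
  \frac{\partial G_N(z)}{\partial N} \;=\; \frac{a^2}{6}\,
  \frac{\partial^2 G_N(z)}{\partial z^2}, \qquad G_0(z) = 1,
\]
\[
  \text{Dirichlet (non-attracting wall):}\quad G_N(0) = 0,
  \qquad
  \text{Robin (attracting wall):}\quad G_N(0) = -x\,G_N'(0),\; x > 0.
\]
```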
Abstract:
Estimating snow mass at continental scales is difficult but important for understanding land-atmosphere interactions, biogeochemical cycles and Northern latitudes’ hydrology. Remote sensing provides the only consistent global observations, but the uncertainty in measurements is poorly understood. Existing techniques for the remote sensing of snow mass are based on the Chang algorithm, which relates the absorption of Earth-emitted microwave radiation by a snow layer to the snow mass within the layer. The absorption also depends on other factors, such as the snow grain size and density, which are assumed and fixed within the algorithm. We examine the assumptions, compare them to field measurements made at the NASA Cold Land Processes Experiment (CLPX) Colorado field site in 2002–2003, and evaluate the consequences of deviation and variability for snow mass retrieval. The accuracy of the emission model used to devise the algorithm also affects retrieval accuracy, so we test this with the CLPX measurements of snow properties against SSM/I and AMSR-E satellite measurements.
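For context, a Chang-type retrieval reduces to a single linear formula in the 19H and 37H brightness temperatures. The sketch below uses the commonly cited coefficient of 4.8 mm/K; the exact coefficient varies between published variants, and the input values are illustrative, not CLPX data.

```python
def chang_swe_mm(tb_19h, tb_37h, coeff=4.8):
    """Chang-type snow water equivalent retrieval (commonly cited form).

    SWE [mm] = coeff * (TB_19H - TB_37H), with brightness temperatures in
    kelvin. The fixed coefficient embeds the algorithm's assumptions of
    roughly 300 kg/m^3 snow density and 0.3 mm grain radius; deviations in
    either property at a real site map directly into retrieval error.
    """
    diff = tb_19h - tb_37h
    return max(coeff * diff, 0.0)  # a negative difference implies no dry snow

# Illustrative brightness temperatures only, not CLPX measurements:
print(chang_swe_mm(tb_19h=245.0, tb_37h=225.0))  # -> 96.0 mm
```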
Abstract:
In a recent study, Williams introduced a simple modification to the widely used Robert–Asselin (RA) filter for numerical integration. The main purpose of the Robert–Asselin–Williams (RAW) filter is to avoid the undesired numerical damping of the RA filter and to increase the accuracy. In the present paper, the effects of the modification are comprehensively evaluated in the Simplified Parameterizations, Primitive Equation Dynamics (SPEEDY) atmospheric general circulation model. First, the authors search for significant changes in the monthly climatology due to the introduction of the new filter. After testing both at the local level and at the field level, no significant changes are found, which is advantageous in the sense that the new scheme does not require a retuning of the parameterized model physics. Second, the authors examine whether the new filter improves the skill of short- and medium-term forecasts. January 1982 data from the NCEP–NCAR reanalysis are used to evaluate the forecast skill. Improvements are found in all the model variables (except the relative humidity, which is hardly changed). The improvements increase with lead time and are especially evident in medium-range forecasts (96–144 h). For example, in tropical surface pressure predictions, 5-day forecasts made using the RAW filter have approximately the same skill as 4-day forecasts made using the RA filter. The results of this work are encouraging for the implementation of the RAW filter in other models currently using the RA filter.
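A minimal sketch of the scheme under discussion: leapfrog integration of a linear oscillator with the RAW filter, where alpha = 1 recovers the classical RA filter and alpha of about 0.53 is the value Williams suggested. The test equation and parameter values are illustrative, not SPEEDY's.

```python
import numpy as np

def leapfrog_raw(omega=1.0, dt=0.2, n_steps=500, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = i*omega*x with the RAW filter.

    alpha = 1.0 recovers the classical Robert-Asselin (RA) filter;
    alpha ~ 0.53 is Williams' suggested value. nu is the usual filter
    strength. Parameter values here are illustrative.
    """
    f = lambda x: 1j * omega * x
    x_prev = 1.0 + 0j                          # time level n-1 (filtered)
    x_curr = x_prev * np.exp(1j * omega * dt)  # exact first step
    for _ in range(n_steps):
        x_next = x_prev + 2.0 * dt * f(x_curr)        # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_curr += alpha * d                # RA stops here (alpha = 1)
        x_next += (alpha - 1.0) * d        # RAW also nudges the new level
        x_prev, x_curr = x_curr, x_next
    return x_curr

# The exact solution has |x| = 1; compare the spurious damping.
print(abs(leapfrog_raw(alpha=1.0)))   # RA: noticeably damped
print(abs(leapfrog_raw(alpha=0.53)))  # RAW: much less damping
```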
Abstract:
Estimating snow mass at continental scales is difficult, but important for understanding land-atmosphere interactions, biogeochemical cycles and the hydrology of the Northern latitudes. Remote sensing provides the only consistent global observations, but with unknown errors. We test the theoretical performance of the Chang algorithm for estimating snow mass from passive microwave measurements using the Helsinki University of Technology (HUT) snow microwave emission model. The algorithm's dependence upon assumptions of fixed and uniform snow density and grain size is determined, and measurements of these properties made at the Cold Land Processes Experiment (CLPX) Colorado field site in 2002–2003 are used to quantify the retrieval errors caused by differences between the algorithm assumptions and measurements. Deviation from the Chang algorithm snow density and grain size assumptions gives rise to an error of a factor of between two and three in calculating snow mass. The possibility that the algorithm performs more accurately over large areas than at points is tested by simulating emission from a 25 km diameter area of snow with a distribution of properties derived from the snow pit measurements, using the Chang algorithm to calculate mean snow mass from the simulated emission. The snow mass estimation from a site exhibiting the heterogeneity of the CLPX Colorado site proves only marginally different from that from a similarly simulated homogeneous site. The estimation accuracy predictions are tested using the CLPX field measurements of snow mass, and simultaneous SSM/I and AMSR-E measurements.
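The point-versus-area question can be illustrated with a toy Monte Carlo in the spirit of the experiment. Note that the forward model below is a deliberately crude stand-in for the HUT emission model, and all distributions are invented for illustration, not CLPX data.

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_tb_diff(swe_mm, grain_mm):
    """Toy stand-in for a microwave emission model (NOT the HUT model):
    the 19H-37H brightness-temperature difference grows with SWE, with a
    sensitivity that increases with grain size."""
    return swe_mm * (0.12 + 0.25 * (grain_mm - 0.3))

# An 'area' of snow with heterogeneous properties, loosely in the spirit
# of snow-pit spread; values are illustrative.
swe = rng.normal(150.0, 40.0, 1000).clip(min=10.0)   # mm
grain = rng.normal(0.5, 0.15, 1000).clip(min=0.1)    # mm

retrieve = lambda dtb: dtb / 0.12  # retrieval assuming the fixed grain size

point_err = retrieve(toy_tb_diff(swe, grain)) - swe   # per-point errors
area_est = retrieve(toy_tb_diff(swe, grain).mean())   # from area-mean emission
print(point_err.mean(), area_est - swe.mean())        # nearly identical biases
```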
Abstract:
Aircraft systems are highly nonlinear and time-varying. High-performance aircraft at high angles of incidence experience undesired coupling of the lateral and longitudinal variables, resulting in departure from normal controlled flight. The aim of this work is to construct a robust closed-loop control that optimally extends the stable and decoupled flight envelope. For the study of these systems, nonlinear analysis methods are needed. Previously, bifurcation techniques have been used mainly to analyze open-loop nonlinear aircraft models and to investigate control effects on dynamic behavior. In this work, linear feedback control designs calculated by eigenstructure assignment methods are investigated for a simple aircraft model at a fixed flight condition. Bifurcation analysis in conjunction with linear control design methods is shown to aid control law design for the nonlinear system.
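As a sketch of the linear design step, the snippet below assigns closed-loop eigenvalues to a hypothetical linearized lateral-directional model using SciPy's pole-placement routine, which exploits the freedom from multiple inputs to shape eigenvectors, in the spirit of eigenstructure assignment. The A and B matrices are invented for illustration, not the paper's aircraft model.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical linearized lateral-directional model
# (states: sideslip, roll rate, yaw rate, bank angle;
#  inputs: aileron, rudder). Values are illustrative only.
A = np.array([[-0.10,  0.00, -1.00,  0.04],
              [-4.00, -1.20,  0.50,  0.00],
              [ 1.50, -0.05, -0.30,  0.00],
              [ 0.00,  1.00,  0.00,  0.00]])
B = np.array([[ 0.00,  0.02],
              [ 6.00,  0.70],
              [ 0.20, -2.50],
              [ 0.00,  0.00]])

# Desired closed-loop eigenvalues (complex poles in conjugate pairs);
# with two inputs, place_poles uses the extra freedom to condition the
# closed-loop eigenvectors.
poles = np.array([-1.0, -1.5, -2.0 + 1.0j, -2.0 - 1.0j])
result = place_poles(A, B, poles)
K = result.gain_matrix            # state feedback u = -K x

print(np.linalg.eigvals(A - B @ K))  # should match the requested poles
```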
Abstract:
Evidence from in vivo and in vitro studies suggests that the consumption of pro- and prebiotics may inhibit colon carcinogenesis; however, the mechanisms involved have, thus far, proved elusive. There are some indications from animal studies that the effects are exerted during the promotion stage of carcinogenesis. One feature of the promotion stage of colorectal cancer is the disruption of tight junctions, leading to a loss of integrity across the intestinal barrier. We have used the Caco-2 human adenocarcinoma cell line as a model for the intestinal epithelium. Trans-epithelial electrical resistance measurements indicate Caco-2 monolayer integrity, and we recorded changes to this integrity following exposure to the fermentation products of selected probiotics and prebiotics, in the form of nondigestible oligosaccharides (NDOs). Our results indicate that NDOs themselves exert varying, but generally minor, effects upon the strength of the tight junctions, whereas the fermentation products of probiotics and NDOs tend to raise tight junction integrity above that of the controls. This effect was specific to both the bacterial species and the oligosaccharide: Bifidobacterium Bb 12 was particularly effective, as were the fermentation products of Raftiline and Raftilose. We further investigated the ability of Raftilose fermentations to protect against the negative effects of deoxycholic acid (DCA) upon tight junction integrity. We found protection to be species-dependent, and contingent upon the fermentation products being present in the media at the same time as, or after, exposure to the DCA. Results suggest that the Raftilose fermentation products may prevent disruption of the intestinal epithelial barrier function during damage by tumor promoters.