978 results for Factorial experiment designs
Abstract:
This paper explores the sensitivity of Atmospheric General Circulation Model (AGCM) simulations to changes in the meridional distribution of sea surface temperature (SST). The simulations are for an aqua-planet, a water-covered Earth with no land, orography or sea-ice and with specified zonally symmetric SST. Simulations from 14 AGCMs developed for Numerical Weather Prediction and climate applications are compared. Four experiments are performed to study the sensitivity to the meridional SST profile. These profiles range from one in which the SST gradient continues to the equator to one which is flat approaching the equator, all with the same maximum SST at the equator. The zonal mean circulation of all models shows strong sensitivity to the latitudinal distribution of SST. The Hadley circulation weakens and shifts poleward as the SST profile flattens in the tropics. One question of interest is the formation of a double versus a single ITCZ. There is a large variation between models in the strength of the ITCZ and in where in the SST experiment sequence they transition from a single to a double ITCZ. The SST profiles are defined such that as the equatorial SST gradient flattens, the maximum gradient increases and moves poleward. This leads to a weakening of the mid-latitude jet accompanied by a poleward shift of the jet core. Also considered are tropical wave activity and tropical precipitation frequency distributions. The details of each vary greatly between models, both with a given SST and in the response to the change in SST. One additional experiment is included to examine the sensitivity to an off-equatorial SST maximum. The upward branch of the Hadley circulation follows the SST maximum off the equator. The models that form a single precipitation maximum when the maximum SST is on the equator shift the precipitation maximum off the equator and keep it centered over the SST maximum. Those that form a double ITCZ, with a minimum over the equatorial SST maximum, shift the double structure off the equator, keeping the minimum over the maximum SST. In both situations only modest changes appear in the shifted profile of zonal average precipitation. When the upward branch of the Hadley circulation moves into the hemisphere with the SST maximum, the zonal average zonal, meridional and vertical winds all indicate that the Hadley cell in the other hemisphere dominates.
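Since the experiments are defined entirely by idealised zonally symmetric SST profiles, a minimal sketch of how such profiles can be constructed may help fix ideas. The functional forms, the 27°C equatorial maximum and the 60° cut-off latitude below are illustrative assumptions, not the exact profiles used in this intercomparison.

```python
import numpy as np

def sst_profile(lat_deg, shape="control", t_max=27.0, lat_edge=60.0):
    """Idealised zonally symmetric SST profiles (degrees C).

    'control' keeps a non-zero meridional gradient right up to the equator,
    while 'flat' levels off approaching the equator; both peak at t_max on
    the equator and fall to 0 degC poleward of lat_edge. Illustrative
    assumptions only, not the profiles of the intercomparison itself.
    """
    lat = np.radians(lat_deg)
    edge = np.radians(lat_edge)
    x = np.clip(np.pi * lat / (2.0 * edge), -np.pi / 2, np.pi / 2)
    if shape == "control":
        sst = t_max * (1.0 - np.sin(x) ** 2)
    elif shape == "flat":
        sst = t_max * (1.0 - np.sin(x) ** 4)   # flatter near the equator
    else:
        raise ValueError(shape)
    return np.where(np.abs(lat) < edge, sst, 0.0)

lats = np.linspace(-90, 90, 181)
print(sst_profile(lats, "flat")[90])   # 27.0 at the equator
```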
Abstract:
Climate simulations by 16 atmospheric general circulation models (AGCMs) are compared on an aqua-planet, a water-covered Earth with prescribed sea surface temperature varying only in latitude. The idealised configuration is designed to expose differences in the circulation simulated by different models. Basic features of the aqua-planet climate are characterised by comparison with Earth. The models display a wide range of behaviour. The balanced component of the tropospheric mean flow, and mid-latitude eddy covariances subject to budget constraints, vary relatively little among the models. In contrast, differences in damping in the dynamical core strongly influence transient eddy amplitudes. Historical uncertainty in modelled lower-stratospheric temperatures persists in APE. Aspects of the circulation generated more directly by interactions between the resolved fluid dynamics and parameterized moist processes vary greatly. The tropical Hadley circulation forms either a single or double inter-tropical convergence zone (ITCZ) at the equator, with large variations in mean precipitation. The equatorial wave spectrum shows a wide range of precipitation intensity and propagation characteristics. Kelvin mode-like eastward propagation with remarkably constant phase speed dominates in most models. Westward propagation, less dispersive than the equatorial Rossby modes, dominates in a few models or occurs within an eastward-propagating envelope in others. The mean structure of the ITCZ is related to precipitation variability, consistent with previous studies. The aqua-planet global energy balance is unknown, but the models produce a surprisingly large range of top-of-atmosphere global net flux, dominated by differences in shortwave reflection by clouds. A number of newly developed models, not optimised for Earth climate, contribute to this. Possible reasons for differences in the optimised models are discussed. The aqua-planet configuration is intended as one component of an experimental hierarchy used to evaluate AGCMs. This comparison does suggest that the range of model behaviour could be better understood and reduced in conjunction with Earth climate simulations. Controlled experimentation is required to explore individual model behaviour and investigate convergence of the aqua-planet climate with increasing resolution.
Abstract:
A system for continuous data assimilation is presented and discussed. To simulate the dynamical development, a channel version of a balanced barotropic model is used, and geopotential (height) data are assimilated into the model's computations as they become available. In the first experiment the updating is performed every 24, 12 and 6 hours with a given network. The stations are distributed at random in 4 groups in order to simulate 4 areas with different densities of stations. Optimum interpolation is performed for the difference between the forecast and the valid observations. The RMS error of the analyses is reduced in time, the error being smaller the more frequently the updating is performed. Updating every 6 hours yields an analysis error smaller than the RMS error of the observations. In a second experiment the updating is performed with data from a moving satellite with a side-scan capability of about 15°. If the satellite data are analysed at every time step before they are introduced into the system, the analysis error is reduced below the RMS error of the observations after only 24 hours, and on the whole this yields a better result than updating from a fixed network. If the satellite data are introduced without any modification, the error of the analysis is reduced much more slowly, and it takes about 4 days to reach a result comparable to the one obtained when the data have been analysed.
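The core of such an updating cycle is the optimum-interpolation correction of the forecast towards the incoming observations. The scalar sketch below is a simplified illustration of that idea only; the error variances, the 6-hourly cycle and the absence of spatial correlations are assumptions, not the scheme of the paper.

```python
import numpy as np

def oi_update(forecast, obs, var_f, var_o):
    """Single-point optimum-interpolation style correction: weight the
    innovation (obs - forecast) by the relative error variances.
    A simplified scalar sketch, not the multivariate scheme of the paper."""
    w = var_f / (var_f + var_o)
    return forecast + w * (obs - forecast)

# toy 6-hourly updating cycle for a 500 hPa height value (metres)
rng = np.random.default_rng(0)
truth, analysis = 5520.0, 5580.0          # start with a 60 m forecast error
for cycle in range(8):
    obs = truth + rng.normal(0.0, 20.0)   # observations with 20 m RMS error
    analysis = oi_update(analysis, obs, var_f=40.0**2, var_o=20.0**2)
print(abs(analysis - truth))              # typically well below the 20 m obs error
```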
Abstract:
Our knowledge of stratospheric O3-N2O correlations is extended, and their potential for model-measurement comparison assessed, using data from the Atmospheric Chemistry Experiment (ACE) satellite and the Canadian Middle Atmosphere Model (CMAM). ACE provides the first comprehensive data set for the investigation of interhemispheric, interseasonal, and height-resolved differences of the O3-N2O correlation structure. By subsampling the CMAM data, the representativeness of the ACE data is evaluated. In the middle stratosphere, where the correlations are not compact and therefore mainly reflect the data sampling, joint probability density functions provide a detailed picture of key aspects of transport and mixing, but also trace polar ozone loss. CMAM captures these important features, but exhibits a displacement of the tropical pipe into the Southern Hemisphere (SH). Below about 21 km, the ACE data generally confirm the compactness of the correlations, although chemical ozone loss tends to destroy the compactness during late winter/spring, especially in the SH. This allows a quantitative comparison of the correlation slopes in the lower and lowermost stratosphere (LMS), which exhibit distinct seasonal cycles that reveal the different balances between diabatic descent and horizontal mixing in these two regions in the Northern Hemisphere (NH), reconciling differences found in aircraft measurements, and the strong role of chemical ozone loss in the SH. The seasonal cycles are qualitatively well reproduced by CMAM, although their amplitude is too weak in the NH LMS. The correlation slopes allow a "chemical" definition of the LMS, which is found to vary substantially in vertical extent with season.
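The quantitative comparison rests on fitting correlation slopes to tracer-tracer scatter in the lower stratosphere. The snippet below is a minimal sketch of that kind of diagnostic on synthetic data; the units, the sampling and the synthetic slope are assumptions, and none of the study's screening or binning criteria are reproduced.

```python
import numpy as np

def tracer_slope(n2o_ppb, o3_ppm):
    """Least-squares slope of the O3:N2O scatter, the basic quantity behind a
    correlation-slope diagnostic. A minimal sketch on synthetic data only."""
    slope, intercept = np.polyfit(n2o_ppb, o3_ppm, 1)
    return slope

# synthetic lower-stratospheric tracer data with a compact linear relation
rng = np.random.default_rng(0)
n2o = rng.uniform(200.0, 320.0, 500)                 # N2O in ppb
o3 = 8.0 - 0.02 * n2o + rng.normal(0.0, 0.1, 500)    # O3 in ppm, slope -0.02 by construction
print(tracer_slope(n2o, o3))                         # approximately -0.02
```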
Abstract:
Despite many decades investigating scalp-recordable 8–13-Hz (alpha) electroencephalographic activity, no consensus has yet emerged regarding either its physiological origins or its functional role in cognition. Here we outline a detailed, physiologically meaningful theory for the genesis of this rhythm that may provide important clues to its functional role. In particular we find that electroencephalographically plausible model dynamics, obtained with physiologically admissible parameterisations, reveal a cortex perched on the brink of stability, which when perturbed gives rise to a range of unanticipated complex dynamics that include 40-Hz (gamma) activity. Preliminary experimental evidence, involving the detection of weak nonlinearity in resting EEG using an extension of the well-known surrogate data method, suggests that nonlinear (deterministic) dynamics are more likely to be associated with weakly damped alpha activity. Thus, rather than the “alpha rhythm” being an idling rhythm, it may be more profitable to conceive of it as a readiness rhythm.
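The surrogate-data test mentioned above compares a nonlinear statistic computed from the recorded EEG with its distribution over phase-randomised surrogates, which preserve the power spectrum but destroy any nonlinear structure. The sketch below shows the standard version of this test on a stand-in signal; the statistic, the signal and the number of surrogates are assumptions, not the extension used in the paper.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum (hence the same linear
    autocorrelation) as x, but with randomised Fourier phases."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                    # keep the zero-frequency component real
    if n % 2 == 0:
        phases[-1] = 0.0               # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

def time_reversal_asymmetry(x):
    """A simple nonlinearity statistic: zero in expectation for linear
    Gaussian processes, so large deviations hint at nonlinear dynamics."""
    return np.mean((x[1:] - x[:-1]) ** 3)

rng = np.random.default_rng(42)
t = np.arange(4096) / 256.0                              # 16 s sampled at 256 Hz
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)  # stand-in "alpha"
surrogate_stats = [time_reversal_asymmetry(phase_randomized_surrogate(eeg, rng))
                   for _ in range(99)]
print(time_reversal_asymmetry(eeg), np.percentile(surrogate_stats, [2.5, 97.5]))
```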
Abstract:
Winter cyclone activity over the Northern Hemisphere is investigated in an ECHAM4/OPYC3 greenhouse gas scenario simulation. The goal of this investigation is to identify changes in cyclone activity associated with increasing greenhouse gas concentrations. To this aim, two 50-year time periods are analysed, one representing present-day climate conditions and the other a perturbed climate in which CO2 concentrations exceed twice the present concentrations. Cyclone activity is assessed using an automatic algorithm, which identifies and tracks cyclones based on sea level pressure fields. The algorithm detects not only large and long-lived cyclones over the main ocean basins, but also their smaller counterparts in secondary storm track regions such as the Mediterranean Basin. For the present climate, results show good agreement with the NCEP reanalysis, provided that the spectral and time resolutions of the reanalysis are reduced to those available for the model. Several prominent changes in cyclone activity are observed for the scenario period in comparison to the present-day climate, especially over the main ocean basins. A significant decrease in overall cyclone track density is found between 35 and 55 degrees North, together with a small increase polewards. These changes result from two different signals for deep and medium cyclones: for deep cyclones (core pressure below 990 hPa) there is a poleward shift in the greenhouse gas scenario, while for medium cyclones (core pressure between 990 and 1010 hPa) a general decrease in cyclone counts is found. The same kind of changes (a shift for intense cyclones and an overall decrease for the weaker ones) are detected when cyclones are classified by their intensity, quantified in terms of ∇²p. Thus, the simulated changes cannot solely be attributed to alterations in mean sea level pressure. Instead, corresponding increases in upper-tropospheric baroclinicity suggest more favourable conditions for the development of stronger systems at higher latitudes, especially at the delta regions of the North Atlantic and the North Pacific storm tracks.
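The detection step of such an automatic scheme amounts to finding sufficiently deep local minima of the sea level pressure field before linking them in time. The sketch below illustrates only that detection step on a toy field; the neighbourhood size, pressure threshold and grid are assumptions, and the tracking and intensity criteria of the actual algorithm are not reproduced.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def detect_cyclone_centres(slp_hpa, threshold_hpa=1010.0, size=5):
    """Flag grid points that are local minima of sea level pressure and lie
    below a pressure threshold. A minimal sketch of the detection step of an
    automatic cyclone identification scheme, without the tracking step."""
    local_min = slp_hpa == minimum_filter(slp_hpa, size=size, mode="wrap")
    return np.argwhere(local_min & (slp_hpa < threshold_hpa))

# toy SLP field (lat x lon) with one deep low centred at index (20, 30)
lat, lon = np.meshgrid(np.arange(40), np.arange(80), indexing="ij")
slp = 1012.0 - 25.0 * np.exp(-((lat - 20) ** 2 + (lon - 30) ** 2) / 50.0)
print(detect_cyclone_centres(slp))   # -> [[20 30]]
```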
Abstract:
Hybrid multiprocessor architectures which combine reconfigurable computing and multiprocessors on a chip are being proposed to transcend the performance of standard multi-core parallel systems. Both fine-grained and coarse-grained parallel algorithm implementations are feasible in such hybrid frameworks. A compositional strategy for designing fine-grained multi-phase regular processor arrays to target hybrid architectures is presented in this paper. The method is based on deriving component designs using classical regular array techniques and composing the components into a unified global design. Run-time phase changes and data routing are characteristic of the resulting designs. In order to describe the data transfer between phases, the concept of a communication domain is introduced, so that the producer–consumer relationship arising from multi-phase computation can be treated in a unified way as a data routing phase. This technique is applied to derive new designs of multi-phase regular arrays with different dataflow between phases of computation.
Abstract:
Lying to participants offers an experimenter the enticing prospect of making “others' behaviour” a controlled variable, but is eschewed by experimental economists because it may pollute the pool of subjects. This paper proposes and implements a new experimental design, the Conditional Information Lottery, which offers all the benefits of deception without actually deceiving anyone. The design should be suitable for most economics experiments, and works by a modification of an already standard device, the Random Lottery incentive system. The deceptive scenarios of designs which use deceit are replaced with fictitious scenarios, each of which, from a subject's viewpoint, has a chance of being true. The design is implemented in a sequential-play public good experiment prompted by Weimann's (1994) result, from a deceptive design, that subjects are more sensitive to free-riding than to cooperation on the part of others. The experiment provides similar results to Weimann's, in that subjects are at least as cooperative when uninformed about others' behaviour as they are when reacting to high contributions. No deception is used and the data cohere well both internally and with other public goods experiments. In addition, simultaneous play is found to be more efficient than sequential play, and subjects contribute less at the end of a sequence than at the start. The results suggest pronounced elements of overconfidence, egoism and (biased) reciprocity in behaviour, which may explain the decay of contributions in repeated-play designs. The experiment shows there is a workable alternative to deception.
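To make the incentive logic concrete, the sketch below schematises one way a Conditional Information Lottery session could be run: the subject responds to every scenario knowing that exactly one is true, and only the true scenario is carried out and paid. The scenario content, endowment and response rule are hypothetical illustrations, not the design of the paper.

```python
import random

def run_session(scenarios, true_index, respond):
    """Collect a response to every scenario; only the single true scenario is
    payoff-relevant, so fictitious scenarios never have to be paid and no
    subject is told anything false. Schematic only; details are assumptions."""
    responses = [respond(s) for s in scenarios]   # subject treats each as possibly real
    return scenarios[true_index], responses[true_index]

# hypothetical public-good framing: each scenario reports others' contributions
scenarios = [{"others_contribution": x} for x in (0, 5, 10, 15)]
respond = lambda s: min(20, s["others_contribution"] + random.randint(0, 5))
true_scenario, paid_choice = run_session(scenarios, true_index=2, respond=respond)
print(true_scenario, paid_choice)
```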
Abstract:
Forgetting immediate physical reality and having awareness of one's location in the simulated world are critical to enjoyment and performance in virtual environments, be it an interactive 3D game such as Quake or an online virtual 3D community space such as Second Life. Answers to the question "where am I?" at two levels (whether the locus is in the immediate real world as opposed to the virtual world, and whether one is aware of the spatial co-ordinates of that locus) hold the key to any virtual 3D experience. While 3D environments, especially virtual environments, and their impact on spatial comprehension have been studied in disciplines such as architecture, it is difficult to determine the relative contributions of specific attributes such as screen size or stereoscopy towards spatial comprehension, since most studies treat the technology as a monolith (box-centered). Using the variable-centered approach put forth by Nass and Mason (1990), which breaks down the technology into its component variables and their corresponding values, as its theoretical basis, this paper looks at the contributions of five variables common to most virtual environments (stereoscopy, screen size, field of view, level of realism and level of detail) to spatial comprehension and presence. The variable-centered approach can be daunting, as an increase in the number of variables can exponentially increase the number of conditions and resources required. We overcome this drawback of adopting such a theoretical approach by using a fractional factorial design for the experiment. This study has completed the first wave of data collection; the next phase starts in January 2007 and is expected to be completed by February 2007. Theoretical and practical implications of the study are discussed.
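Since the fractional factorial design is what keeps five variables tractable, the sketch below shows the standard construction of a two-level half-fraction for five factors, with the fifth factor aliased with the four-way interaction. The generator, the two-level coding and the mapping to the five attributes are assumptions; the paper does not state which fraction was actually run.

```python
from itertools import product

FACTORS = ("stereoscopy", "screen_size", "field_of_view", "realism", "detail")

def half_fraction_2_5_1():
    """2^(5-1) fractional factorial with defining relation E = ABCD
    (resolution V): 16 runs instead of the 32 of the full 2^5 design.
    A standard construction, used here only as an illustrative sketch."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c * d                 # fifth factor generated from the other four
        runs.append(dict(zip(FACTORS, (a, b, c, d, e))))
    return runs

design = half_fraction_2_5_1()
print(len(design))       # 16 experimental conditions
print(design[0])         # all factors at the low (-1) level except 'detail'
```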
Abstract:
Factorial pot experiments were conducted to compare the responses of GA-sensitive and GA-insensitive reduced height (Rht) alleles in wheat for susceptibility to heat and drought stress during booting and anthesis. Grain set (grains/spikelet) of near-isogenic lines (NILs) was assessed following three-day transfers to controlled environments imposing day temperatures (t) from 20 to 40°C. Transfers were made during booting and/or anthesis, and pots were either maintained at field capacity (FC) or had water withheld. Logistic responses, y = c / (1 + e^(-b(t - m))), described declining grain set with increasing t, and t5 was the fitted temperature giving a 5% reduction in grain set. Averaged over NILs, t5 for anthesis at FC was 31.7±0.47°C (S.E.M., 26 d.f.). Drought at anthesis reduced t5 by <2°C. Maintaining FC at booting conferred considerable resistance to high temperatures (t5 = 33.9°C), but booting was particularly heat-susceptible without water (t5 = 26.5°C). In one background (cv. Mercia), for NILs varying at the Rht-D1 locus, there was a progressive reduction in t5 with dwarfing and reduced gibberellic acid (GA) sensitivity (Rht-D1a, tall, 32.7±0.72; Rht-D1b, semi-dwarf, 29.5±0.85; Rht-D1c, severe dwarf, 24.2±0.72). This trend was not evident for the Rht-B1 locus, or for Rht-D1b in an alternative background (Maris Widgeon). The GA-sensitive severe dwarf Rht12 was more heat-tolerant (t5 = 29.4±0.72) than the similarly statured GA-insensitive Rht-D1c. The GA-sensitive, semi-dwarfing Rht8 conferred greater drought tolerance in one experiment. Despite the effects of Rht-D1 alleles in Mercia on stress tolerance, the inconsistency of the effects over background and locus led to the conclusion that semi-dwarfing with GA-insensitivity did not necessarily increase sensitivity to stress at booting and flowering. In comparison to the effects of semi-dwarfing alleles, responses to heat stress are much more dramatically affected by water availability and the precise growth stage at which the stress is experienced by the plants.
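For readers who want to reproduce this kind of analysis, the sketch below fits the declining logistic y = c / (1 + e^(-b(t - m))) to grain-set data and derives t5, the temperature at which grain set falls 5% below the upper asymptote. The data points and starting values are invented for illustration; they are not the measurements of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, c, b, m):
    """y = c / (1 + exp(-b*(t - m))); with b < 0 grain set declines as t rises."""
    return c / (1.0 + np.exp(-b * (t - m)))

def t5(b, m):
    """Temperature giving a 5% reduction from the asymptote, i.e. y = 0.95*c."""
    return m + np.log(0.95 / 0.05) / b

# illustrative grain set (grains/spikelet) versus day temperature (degC); not real data
temps = np.array([20.0, 24.0, 28.0, 30.0, 32.0, 34.0, 36.0, 40.0])
grains = np.array([2.9, 2.9, 2.8, 2.6, 1.9, 1.0, 0.4, 0.1])
(c_hat, b_hat, m_hat), _ = curve_fit(logistic, temps, grains, p0=(3.0, -0.5, 33.0))
print(round(t5(b_hat, m_hat), 1))   # fitted t5 in degC for this invented data set
```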
Abstract:
We apply an alternating-proposals protocol with a confirmation stage as a way of solving a Prisoner’s Dilemma game. We interpret players’ proposals and (non-)confirmation of outcomes of the game as a tacit communication device. The protocol leads to unprecedentedly high levels of cooperation in the laboratory. Assigning the power of confirmation to one of the two players alone, rather than alternating the role of leader, significantly increases the probability of signing a cooperative agreement in the first bargaining period. We interpret pre-agreement strategies as tacit messages about players’ willingness to cooperate and about their beliefs about the other player’s type.