2 results for large eddy simulation
at Universidade Complutense de Madrid
Abstract:
The ECHAM-1 T21/LSG coupled ocean-atmosphere general circulation model (GCM) is used to simulate climatic conditions at the last interglacial maximum (Eemian, 125 kyr BP). The results reflect the expected surface temperature changes (with respect to the control run) due to the amplification (reduction) of the seasonal cycle of insolation in the Northern (Southern) Hemisphere. A number of simulated features agree with previous results from atmospheric GCM simulations (e.g. intensified summer southwest monsoons), except in the Northern Hemisphere poleward of 30 degrees N, where dynamical feedbacks in the North Atlantic and North Pacific increase zonal temperatures about 1 degree C above what would be predicted from simple energy balance considerations. As this is the same area where most of the terrestrial geological data originate, this result suggests that previous estimates of Eemian global average temperature might have been biased by sample distribution. This conclusion is supported by the fact that the estimated global temperature increase of only 0.3 degrees C above the control run has been previously shown to be consistent with CLIMAP sea surface temperature estimates. Although the Northern Hemisphere summer monsoon is intensified, globally averaged precipitation over land is within about 1% of the present, contravening some geological inferences but not the deep-sea delta(13)C estimates of terrestrial carbon storage changes. Winter circulation changes in the northern Arabian Sea, driven by strong cooling on land, are as large as the summer circulation changes that are the usual focus of interest, suggesting that interpreting variations in the Arabian Sea sedimentary record solely in terms of the summer monsoon response could sometimes lead to errors. A small monsoonal response over northern South America suggests that interglacial paleotrends in this region were not just due to El Nino variations.
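The abstract contrasts the GCM response with "what would be predicted from simple energy balance considerations". As a rough illustration only, the sketch below shows a zero-dimensional energy-balance calculation of the kind implied; the albedo, emissivity, and insolation-anomaly values are assumptions chosen for the example, not quantities taken from the study.

```python
import numpy as np

# Illustrative zero-dimensional energy-balance estimate of the temperature
# response to an insolation anomaly. The parameter values below are
# placeholder assumptions, not numbers from the study.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.30         # assumed planetary albedo
EMISSIVITY = 0.61     # assumed effective emissivity (greenhouse effect)

def equilibrium_temperature(insolation):
    """Equilibrium surface temperature (K) for a given top-of-atmosphere
    insolation (W m^-2), balancing absorbed solar against emitted longwave."""
    absorbed = insolation * (1.0 - ALBEDO) / 4.0
    return (absorbed / (EMISSIVITY * SIGMA)) ** 0.25

# Present-day solar constant vs. an assumed orbitally amplified value.
t_control = equilibrium_temperature(1361.0)
t_perturbed = equilibrium_temperature(1361.0 + 10.0)  # +10 W m^-2 is illustrative
print(f"Energy-balance warming estimate: {t_perturbed - t_control:.2f} K")
```

A calculation of this kind has no ocean or atmospheric dynamics, which is why the dynamical feedbacks discussed in the abstract can push regional temperatures above such a prediction.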
Abstract:
Current interest in measuring quality of life has spurred the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, the number is usually too large to administer to each subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests, with some common items across subtests; then administering each subtest to a different sample; and finally running estimation algorithms once on the aggregated data array, from which a substantial number of responses are then missing. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions on the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternatives on the configuration of the anchor-item design. The factors defining variants of the anchor-item design are (a) subtest size, (b) balance of common and unique items per subtest, (c) characteristics of the common items, and (d) criteria for the distribution of unique items across subtests. The results of this study indicate that maximizing accuracy in item parameter recovery requires subtests of the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for distribution of unique items do not affect accuracy.
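As an illustration of the concurrent anchor-item design described above, the sketch below builds an item-to-subtest allocation and the resulting aggregated response array with structurally missing entries. The subtest size, number of anchor items, sample sizes, and random placeholder responses are hypothetical choices for the example, not the configurations evaluated in the study (which would also use a polytomous IRT model rather than random responses).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 3 subtests, 10 common (anchor) items shared by all
# subtests, 20 unique items per subtest, 200 respondents per subtest.
N_SUBTESTS, N_COMMON, N_UNIQUE, N_PER_SUBTEST = 3, 10, 20, 200
N_ITEMS = N_COMMON + N_SUBTESTS * N_UNIQUE
N_CATEGORIES = 5  # Likert-type items with 5 response options

# Item indices administered in each subtest: the anchor block plus the
# subtest's own block of unique items.
subtest_items = [
    list(range(N_COMMON)) +
    list(range(N_COMMON + k * N_UNIQUE, N_COMMON + (k + 1) * N_UNIQUE))
    for k in range(N_SUBTESTS)
]

# Aggregated data array: one row per respondent, one column per item.
# Items not administered to a respondent stay missing (NaN), producing the
# "substantial number of missing responses" the design implies.
data = np.full((N_SUBTESTS * N_PER_SUBTEST, N_ITEMS), np.nan)
for k, items in enumerate(subtest_items):
    rows = slice(k * N_PER_SUBTEST, (k + 1) * N_PER_SUBTEST)
    # Placeholder responses in 1..N_CATEGORIES; a real calibration would
    # generate or observe responses under a polytomous IRT model.
    data[rows, items] = rng.integers(1, N_CATEGORIES + 1,
                                     size=(N_PER_SUBTEST, len(items)))

print(f"{np.isnan(data).mean():.0%} of the aggregated array is missing")
```

Estimation algorithms are then run once on this single aggregated array, with the anchor items linking the otherwise disjoint samples onto a common scale.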