Abstract:
The entropy budget of the coupled atmosphere–ocean general circulation model HadCM3 is calculated. Estimates of the different entropy sources and sinks of the climate system are obtained directly from the diabatic heating terms, and an approximate estimate of the planetary entropy production is also provided. The rate of material entropy production of the climate system is found to be ∼50 mW m−2 K−1, a value intermediate in the range 30–70 mW m−2 K−1 previously reported from different models. The largest part of this is due to sensible and latent heat transport (∼38 mW m−2 K−1). Another 13 mW m−2 K−1 is due to dissipation of kinetic energy in the atmosphere by friction and Reynolds stresses. Numerical entropy production in the atmosphere dynamical core is found to be about 0.7 mW m−2 K−1. The material entropy production within the ocean due to turbulent mixing is ∼1 mW m−2 K−1, a very small contribution to the material entropy production of the climate system. The rate of change of entropy of the model climate system is about 1 mW m−2 K−1 or less, which is comparable with the typical size of the fluctuations of the entropy sources due to interannual variability, and a more accurate closure of the budget than achieved by previous analyses. Results are similar for FAMOUS, which has a lower spatial resolution but similar formulation to HadCM3, while more substantial differences are found with respect to other models, suggesting that the formulation of the model has an important influence on the climate entropy budget. Since this is the first diagnosis of the entropy budget in a climate model of the type and complexity used for projection of twenty-first century climate change, it would be valuable if similar analyses were carried out for other such models.
Abstract:
General circulation models (GCMs) use the laws of physics and an understanding of past geography to simulate climatic responses. They are objective in character. However, they tend to require powerful computers to handle vast numbers of calculations. Nevertheless, it is now possible to compare results from different GCMs for a range of times and over a wide range of parameterisations for the past, present and future (e.g. in terms of predictions of surface air temperature, surface moisture, precipitation, etc.). GCMs are currently producing simulated climate predictions for the Mesozoic, which compare favourably with the distributions of climatically sensitive facies (e.g. coals, evaporites and palaeosols). They can be used effectively in the prediction of oceanic upwelling sites and the distribution of petroleum source rocks and phosphorites. Models also produce evaluations of other parameters that do not leave a geological record (e.g. cloud cover, snow cover) and equivocal phenomena such as storminess. Parameterisation of sub-grid scale processes is the main weakness in GCMs (e.g. land surfaces, convection, cloud behaviour) and model output for continental interiors is still too cold in winter by comparison with palaeontological data. The sedimentary and palaeontological record provides an important way that GCMs may themselves be evaluated and this is important because the same GCMs are being used currently to predict possible changes in future climate. The Mesozoic Earth was, by comparison with the present, an alien world, as we illustrate here by reference to late Triassic, late Jurassic and late Cretaceous simulations. Dense forests grew close to both poles but experienced months-long daylight in warm summers and months-long darkness in cold snowy winters. Ocean depths were warm (8 degrees C or more to the ocean floor) and reefs, with corals, grew 10 degrees of latitude further north and south than at the present time. 
The whole Earth was warmer than now by 6 degrees C or more, giving more atmospheric humidity and a greatly enhanced hydrological cycle. Much of the rainfall was predominantly convective in character, often focused over the oceans and leaving major desert expanses on the continental areas. Polar ice sheets are unlikely to have been present because of the high summer temperatures achieved. The model indicates extensive sea ice in the nearly enclosed Arctic seaway through a large portion of the year during the late Cretaceous, and the possibility of sea ice in adjacent parts of the Midwest Seaway over North America. The Triassic world was a predominantly warm world, the model output for evaporation and precipitation conforming well with the known distributions of evaporites, calcretes and other climatically sensitive facies for that time. The message from the geological record is clear. Through the Phanerozoic, Earth's climate has changed significantly, both on a variety of time scales and over a range of climatic states, usually baldly referred to as "greenhouse" and "icehouse", although these terms disguise more subtle states between these extremes. Any notion that the climate can remain constant for the convenience of one species of anthropoid is a delusion (although the recent rate of climatic change is exceptional). (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
In this study, the mechanisms leading to the El Niño peak and demise are explored through a coupled general circulation model ensemble approach evaluated against observations. The results here suggest that the timing of the peak and demise for intense El Niño events is highly predictable, as the evolution of the coupled system is strongly driven by a southward shift of the intense equatorial Pacific westerly anomalies during boreal winter. In fact, this systematic late-year shift drives an intense eastern Pacific thermocline shallowing, constraining a rapid El Niño demise in the following months. This wind shift results from a southward displacement in winter of the central Pacific warmest SSTs in response to the seasonal evolution of solar insolation. In contrast, the intensity of this seasonal feedback mechanism and its impact on the coupled system are significantly weaker in moderate El Niño events, resulting in a less pronounced thermocline shallowing. This shallowing transfers the coupled system into an unstable state in spring but is not sufficient to systematically constrain the equatorial Pacific evolution toward a rapid El Niño termination. However, for some moderate events, the occurrence of intense easterly wind anomalies in the eastern Pacific during that period initiates a rapid surge of cold SSTs leading to La Niña conditions. In other cases, weaker trade winds combined with a slightly deeper thermocline allow the coupled system to maintain a broad warm phase evolving through the entire spring and summer and a delayed El Niño demise, an evolution that is similar to the prolonged 1986/87 El Niño event. La Niña events also show a similar tendency to peak in boreal winter, with characteristics and mechanisms mainly symmetric to those described for moderate El Niño cases.
Abstract:
Two simple and frequently used capture–recapture estimates of the population size are compared: Chao's lower-bound estimate and Zelterman's estimate allowing for contaminated distributions. In the Poisson case it is shown that if there are only counts of ones and twos, the estimator of Zelterman is always bounded above by Chao's estimator. If counts larger than two exist, Zelterman's estimator becomes larger than Chao's only if the ratio of the frequencies of counts of twos and ones is small enough. A similar analysis is provided for the binomial case. For a two-component mixture of Poisson distributions the asymptotic bias of both estimators is derived, and it is shown that the Zelterman estimator can experience large overestimation bias. A modified Zelterman estimator is suggested, and the bias-corrected version of Chao's estimator is also considered. All four estimators are compared in a simulation study.
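Both estimators have simple closed forms built from the observed frequency counts, where n is the number of observed units, f1 the number observed exactly once and f2 the number observed exactly twice. A minimal sketch in Python, using the commonly cited forms (Chao: n + f1²/(2·f2); Zelterman: n/(1 − exp(−2·f2/f1))); the function names and input conventions here are ours, not the paper's:

```python
import math
from collections import Counter

def chao_lower_bound(counts):
    """Chao's lower-bound population estimate: n + f1^2 / (2*f2).

    counts: list with one capture count (>= 1) per observed unit.
    Assumes f2 > 0.
    """
    f = Counter(counts)
    n = len(counts)              # number of distinct units observed
    f1, f2 = f[1], f[2]          # singletons and doubletons
    return n + f1 * f1 / (2 * f2)

def zelterman(counts):
    """Zelterman's population estimate: n / (1 - exp(-2*f2/f1)).

    Uses the robust Poisson-parameter estimate lambda = 2*f2/f1,
    based only on the counts of ones and twos. Assumes f1 > 0.
    """
    f = Counter(counts)
    n = len(counts)
    lam = 2 * f[2] / f[1]
    return n / (1 - math.exp(-lam))
```

With ten singletons and five doubletons, for example, Chao's estimate is 15 + 100/10 = 25, while Zelterman's is 15/(1 − e⁻¹) ≈ 23.7, consistent with the bound stated above for data containing only ones and twos.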
Abstract:
The multidecadal variability of the El Niño–Southern Oscillation (ENSO)–South Asian monsoon relationship is elucidated in a 1000 year control simulation of a coupled general circulation model. The results indicate that the Atlantic Multidecadal Oscillation (AMO), resulting from the natural fluctuation of the Atlantic Meridional Overturning Circulation (AMOC), plays an important role in modulating the multidecadal variation of the ENSO–monsoon relationship. The sea surface temperature anomalies associated with the AMO induce not only significant climate impact in the Atlantic but also coupled feedbacks in the tropical Pacific regions. The remote response in the Pacific Ocean to a positive phase of the AMO, which results from an enhanced AMOC in the model simulation, is characterized by statistically significant warming in the North Pacific and in the western tropical Pacific, a relaxation of the tropical easterly trades in the central and eastern tropical Pacific, and a deeper thermocline in the eastern tropical Pacific. These changes in mean states lead to a reduction of ENSO variability and therefore a weakening of the ENSO–monsoon relationship. This study suggests a nonlocal mechanism for the low-frequency fluctuation of the ENSO–monsoon relationship, although the AMO explains only a fraction of the ENSO–South Asian monsoon variation on decadal to multidecadal timescales. Given that the multidecadal variations of the AMOC, and therefore of the AMO, exhibit decadal predictability, this study highlights the possibility that part of the change of climate variability in the Pacific Ocean and its teleconnections may be predictable.
Abstract:
Studies of construction labour productivity have revealed that limited predictability and multi-agent social complexity make long-range planning of construction projects extremely inaccurate. Fire-fighting, a cultural feature of construction project management, the social and structural diversity of the permanent organizations involved, and structural temporality all contribute towards relational failures and frequent changes. The main purpose of this paper is therefore to demonstrate that appropriate construction planning may have a profound synergistic effect on the structural integration of a project organization. Using the general systems theory perspective, a further specific objective is to investigate and evaluate the organizational effects of changes in planning and the potential for achieving continuous project-organizational synergy. The newly developed methodology recognises that planning should also represent a continuous, improvement-leading driving force throughout a project. The synergistic effect of the process planning membership duality fostered project-wide integration, eliminated internal boundaries, and created a pool of constantly upgrading knowledge. It maintained a creative environment that resulted in a number of process-related improvements from all parts of the organization. As a result, labour productivity has seen increases of more than 30%, profits have risen from an average of 12% to more than 18%, and project durations have been reduced by several days.
Abstract:
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone. The methods are also limited to optical see-through HMDs. Building on our existing HMD calibration method [1], we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in various positions. The locations of image features on the calibration object are then re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the display’s intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner in both see-through and non-see-through modes and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors and involves no error-prone human measurements.
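The reprojection errors reported above measure the pixel distance between observed image features and features reprojected through the recovered camera model. As a minimal sketch (not the paper's implementation), here is how such an error metric is computed under a simple pinhole model with focal length f and principal point (cx, cy); all names are hypothetical and lens distortion is omitted:

```python
import math

def project(point, f, cx, cy):
    """Pinhole projection of a 3D camera-frame point (X, Y, Z) to pixels (u, v)."""
    X, Y, Z = point
    return (f * X / Z + cx, f * Y / Z + cy)

def rms_reprojection_error(points3d, observed2d, f, cx, cy):
    """Root-mean-square pixel error between observed and reprojected features."""
    sq_errs = []
    for P, (u_obs, v_obs) in zip(points3d, observed2d):
        u, v = project(P, f, cx, cy)
        sq_errs.append((u - u_obs) ** 2 + (v - v_obs) ** 2)
    return math.sqrt(sum(sq_errs) / len(sq_errs))
```

In a full calibration, the intrinsic and extrinsic parameters would be chosen to minimise exactly this kind of error over all captured views.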
Abstract:
The aim of this review article is to provide an overview of the role of pigs as a biomedical model for humans. The usefulness and limitations of porcine models have been discussed in terms of metabolic, cardiovascular, digestive and bone diseases in humans. Domestic pigs and minipigs are the main categories of pigs used as biomedical models. One drawback of minipigs is that they are in short supply and expensive compared with domestic pigs, which in contrast cost more to house, feed and medicate. Different porcine breeds show different responses to the induction of specific diseases. For example, Ossabaw minipigs provide a better model than Yucatan for the metabolic syndrome as they exhibit obesity, insulin resistance and hypertension, all of which are absent in the Yucatan. Similar metabolic/physiological differences exist between domestic breeds (e.g. Meishan v. Pietrain). The modern commercial (e.g. Large White) domestic pig has been the preferred model for developmental programming due to the 2- to 3-fold variation in body weight among littermates providing a natural form of foetal growth retardation not observed in ancient (e.g. Meishan) domestic breeds. Pigs have been increasingly used to study chronic ischaemia, therapeutic angiogenesis, hypertrophic cardiomyopathy and abdominal aortic aneurysm as their coronary anatomy and physiology are similar to humans. Type 1 and type 2 diabetes can be induced in swine using dietary regimes and/or administration of streptozotocin. Pigs are a good and extensively used model for specific nutritional studies as their protein and lipid metabolism is comparable with humans, although pigs are not as sensitive to protein restriction as rodents. Neonatal and weanling pigs have been used to examine the pathophysiology and prevention/treatment of microbial-associated diseases and immune system disorders. 
A porcine model mimicking various degrees of prematurity in infants receiving total parenteral nutrition has been established to investigate gut development, amino acid metabolism and non-alcoholic fatty liver disease. Endoscopic therapeutic methods for upper gastrointestinal tract bleeding are being developed. The bone remodelling cycle in pigs is histologically more similar to that of humans than to that of rats or mice, and is used to examine the relationship between menopause and osteoporosis. Work has also been conducted on dental implants in pigs to consider loading; however, this must be interpreted with caution, as porcine bone remodels slightly faster than human bone. We conclude that pigs are a valuable translational model to bridge the gap between classical rodent models and humans in developing new therapies to aid human health.
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space averaged over the invariant measure of the unperturbed state. We choose as test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. 
Some newly obtained empirical closure equations for such parameters allow one to define such properties as an explicit function of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations up to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale by resorting to only well selected simulations, and by taking full advantage of ensemble methods. The specific case of globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
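The central objects of the response theory summarized above can be written compactly in standard linear-response notation (a sketch; the paper's own symbols may differ): with $f(t)$ the time pattern of the forcing, $A$ the observable, and $G_A$ the linear Green function, the first-order response and its susceptibility are

```latex
\[
\langle \delta A \rangle(t) \;=\; \int_{-\infty}^{t} G_A(t-\tau)\, f(\tau)\, d\tau ,
\qquad
\chi_A(\omega) \;=\; \int_{0}^{\infty} G_A(t)\, e^{i\omega t}\, dt ,
\]
where causality ($G_A(t) = 0$ for $t < 0$) implies the Kramers--Kronig relation
\[
\operatorname{Re}\chi_A(\omega) \;=\; \frac{1}{\pi}\, \mathcal{P}\!\int_{-\infty}^{\infty}
\frac{\operatorname{Im}\chi_A(\omega')}{\omega' - \omega}\, d\omega' .
\]
```

The sum rules and asymptotic constraints mentioned above follow from integrating these relations against powers of $\omega$.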
Abstract:
The hypothesis of a low dimensional martian climate attractor is investigated by the application of the proper orthogonal decomposition (POD) to a simulation of martian atmospheric circulation using the UK Mars general circulation model (UK-MGCM). In this article we focus on a time series of the interval between autumn and winter in the northern hemisphere, when baroclinic activity is intense. The POD is a statistical technique that allows the attribution of total energy (TE) to particular structures embedded in the UK-MGCM time-evolving circulation. These structures are called empirical orthogonal functions (EOFs). Ordering the EOFs according to their associated energy content, we were able to determine the number necessary to account for a chosen amount of atmospheric TE. We show that for Mars a large fraction of TE is explained by just a few EOFs (with 90% TE in 23 EOFs), which apparently supports the initial hypothesis. We also show that the resulting EOFs represent classical types of atmospheric motion, such as thermal tides and transient waves. Thus, POD is shown to be an efficient method for the identification of different classes of atmospheric modes. It also provides insight into the non-linear interaction of these modes.
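The energy-ranking step described above (ordering EOFs by their share of total energy and counting how many are needed to reach, say, 90% of TE) can be sketched in plain Python. The eigenvalues of the covariance matrix of the mean-removed snapshots give each EOF's energy; here they are extracted by power iteration with deflation, as a toy stand-in for the full POD of a GCM time series (all names are ours):

```python
def covariance(snapshots):
    """Covariance matrix of a list of state vectors, time mean removed."""
    n, d = len(snapshots), len(snapshots[0])
    mean = [sum(s[j] for s in snapshots) / n for j in range(d)]
    dev = [[s[j] - mean[j] for j in range(d)] for s in snapshots]
    return [[sum(dev[t][i] * dev[t][j] for t in range(n)) / n
             for j in range(d)] for i in range(d)]

def leading_eof(C, iters=500):
    """Dominant eigenvalue/eigenvector of a symmetric PSD matrix (power iteration)."""
    d = len(C)
    v, lam = [1.0] * d, 0.0
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        lam = sum(x * x for x in w) ** 0.5     # eigenvalue estimate (PSD case)
        v = [x / lam for x in w]
    return lam, v

def energy_fractions(C, k):
    """Cumulative fraction of total energy (trace of C) in the first m EOFs, m = 1..k."""
    total = sum(C[i][i] for i in range(len(C)))
    fractions, captured = [], 0.0
    for _ in range(k):
        lam, v = leading_eof(C)
        captured += lam
        fractions.append(captured / total)
        # deflate: remove the mode just found, then look for the next one
        C = [[C[i][j] - lam * v[i] * v[j] for j in range(len(C))]
             for i in range(len(C))]
    return fractions
```

The number of EOFs needed for 90% of TE is then simply the first index at which the cumulative fraction exceeds 0.9.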
Abstract:
In this study we quantify the relationship between the aerosol optical depth increase from a volcanic eruption and the severity of the subsequent surface temperature decrease. This investigation is made by simulating 10 different sizes of eruption in a global circulation model (GCM) by changing stratospheric sulfate aerosol optical depth at each time step. The sizes of the simulated eruptions range from Pinatubo‐sized up to the magnitude of supervolcanic eruptions around 100 times the size of Pinatubo. From these simulations we find that there is a smooth monotonic relationship between the global mean maximum aerosol optical depth anomaly and the global mean temperature anomaly and we derive a simple mathematical expression which fits this relationship well. We also construct similar relationships between global mean aerosol optical depth and the temperature anomaly at every individual model grid box to produce global maps of best‐fit coefficients and fit residuals. These maps are used with caution to find the eruption size at which a local temperature anomaly is clearly distinct from the local natural variability and to approximate the temperature anomalies which the model may simulate following a Tambora‐sized eruption. To our knowledge, this is the first study which quantifies the relationship between aerosol optical depth and resulting temperature anomalies in a simple way, using the wealth of data that is available from GCM simulations.
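The abstract does not give the fitted expression itself, so purely as an illustration of deriving such a relationship from simulated eruption data, here is an ordinary least-squares fit of a hypothetical power-law form ΔT = −a·τ^b (τ the global mean peak aerosol optical depth anomaly), linearized through logarithms; the functional form and names are our assumptions, not the paper's:

```python
import math

def fit_power_law(tau, dT):
    """Fit dT = -a * tau**b by least squares in log-log space.

    tau: peak aerosol optical depth anomalies (> 0), one per simulated eruption
    dT:  corresponding peak temperature anomalies (< 0, i.e. cooling)
    Returns the fitted coefficients (a, b).
    """
    xs = [math.log(t) for t in tau]
    ys = [math.log(-d) for d in dT]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope and intercept of the regression line in log-log coordinates
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b
```

The same fit, applied grid box by grid box, would yield maps of best-fit coefficients analogous to those described above.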
Abstract:
Satellite data are used to quantify and examine the bias in the outgoing long-wave (LW) radiation over North Africa during May–July simulated by a range of climate models and the Met Office global numerical weather prediction (NWP) model. Simulations from an ensemble-mean of multiple climate models overestimate outgoing clear-sky long-wave radiation (LWc) by more than 20 W m−2 relative to observations from Clouds and the Earth's Radiant Energy System (CERES) for May–July 2000 over parts of the west Sahara, and by 9 W m−2 for the North Africa region (20°W–30°E, 10–40°N). Experiments with the atmosphere-only version of the High-resolution Hadley Centre Global Environment Model (HiGEM), suggest that including mineral dust radiative effects removes this bias. Furthermore, only by reducing surface temperature and emissivity by unrealistic amounts is it possible to explain the magnitude of the bias. Comparing simulations from the Met Office NWP model with satellite observations from Geostationary Earth Radiation Budget (GERB) instruments suggests that the model overestimates the LW by 20–40 W m−2 during North African summer. The bias declines over the period 2003–2008, although this is likely to relate to improvements in the model and inhomogeneity in the satellite time series. The bias in LWc coincides with high aerosol dust loading estimated from the Ozone Monitoring Instrument (OMI), including during the GERBILS field campaign (18–28 June 2007) where model overestimates in LWc greater than 20 W m−2 and OMI-estimated aerosol optical depth (AOD) greater than 0.8 are concurrent around 20°N, 0–20°W. A model-minus-GERB LW bias of around 30 W m−2 coincides with high AOD during the period 18–21 June 2007, although differences in cloud cover also impact the model–GERB differences. Copyright © Royal Meteorological Society and Crown Copyright, 2010
Abstract:
This study considers the consistency of the role of both the private and public real estate markets within a mixed-asset context. While a vast literature has developed that has examined the potential role of both the private and public real estate markets, most studies have largely relied on both single time horizons and single sample periods. This paper builds upon the analysis of Lee and Stevenson (2005), who examined the consistency of REITs in a US capital market portfolio. The current paper extends that work by also analyzing the role of the private market. To address the question, the allocation of both the private and traded markets is evaluated over different holding periods varying from 5 to 20 years. In general the results show that optimum mixed-asset portfolios already containing private real estate have little place for public real estate securities, especially in low risk portfolios and for longer investment horizons. Additionally, mixed-asset portfolios with public real estate either see the allocations to REITs diminished or eliminated if private real estate is also considered. The results demonstrate that there is still a strong case for private real estate in the mixed-asset portfolio on the basis of an increase in risk-adjusted performance, even if the investor is already holding REITs, but that the reverse is not always the case.
Abstract:
This special issue conceives of “Shakespeare and Islam” in its broadest sense, conceptually, and opens up the conjunction to consideration of both the early modern and more recent periods. It is not directly concerned with addressing doctrinal questions: “Islam” is a flag of convenience for our purposes, an umbrella term that takes in not only the Ottoman Empire but also the Persian (a subject that, perhaps unsurprisingly, tends to be overshadowed by its stronger neighbour), and extends to a discussion of twentieth- and twenty-first-century issues of Shakespearean interpretation. In line with this journal's principal remit, the essays concentrate on questions of staging and interpretation, adaptation and appropriation, thus drawing on and contributing to one of the dominant fields of Shakespeare studies today. While the early modern period remains the collection's central interest, two concluding essays remind us (if we need reminding) that the seemingly endless recycling and reinterpretation of Shakespeare have implications for how we understand the conjunction with Islam today.