24 results for departures
Abstract:
This document outlines a practical strategy for achieving an observationally based quantification of direct climate forcing by anthropogenic aerosols. The strategy involves a four-step program for shifting the current assumption-laden estimates to an increasingly empirical basis using satellite observations coordinated with suborbital remote and in situ measurements and with chemical transport models. Conceptually, the problem is framed as a need for complete global mapping of four parameters: clear-sky aerosol optical depth δ, radiative efficiency per unit optical depth E, fine-mode fraction of optical depth f_f, and the anthropogenic fraction of the fine mode f_af. The first three parameters can be retrieved from satellites, but correlative, suborbital measurements are required for quantifying the aerosol properties that control E, for validating the retrieval of f_f, and for partitioning fine-mode δ between natural and anthropogenic components. The satellite focus is on the “A-Train,” a constellation of six spacecraft that will fly in formation from about 2005 to 2008. Key satellite instruments for this report are the Moderate Resolution Imaging Spectroradiometer (MODIS) and Clouds and the Earth's Radiant Energy System (CERES) radiometers on Aqua, the Ozone Monitoring Instrument (OMI) radiometer on Aura, the Polarization and Directionality of Earth's Reflectances (POLDER) polarimeter on the Polarization and Anisotropy of Reflectances for Atmospheric Sciences Coupled with Observations from a Lidar (PARASOL) satellite, and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) lidar on the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) platform. This strategy is offered as an initial framework, subject to improvement over time, for scientists around the world to participate in the A-Train opportunity. It is a specific implementation of the Progressive Aerosol Retrieval and Assimilation Global Observing Network (PARAGON) program, presented earlier in this journal, which identified the integration of diverse data as the central challenge to progress in quantifying global-scale aerosol effects. By designing a strategy around this need for integration, we develop recommendations for both satellite data interpretation and correlative suborbital activities that represent, in many respects, departures from current practice.
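The abstract frames the anthropogenic direct forcing in terms of four mapped parameters. A minimal sketch of how such a decomposition could be evaluated on gridded fields, assuming the parameters combine multiplicatively per grid cell (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def anthropogenic_direct_forcing(delta, E, f_f, f_af, area_weights):
    """Illustrative decomposition: per-cell anthropogenic clear-sky forcing
    approximated as E * (delta * f_f * f_af), then area-averaged.
    Inputs are 2-D lat/lon grids named after the abstract's symbols
    (delta: aerosol optical depth, E: radiative efficiency in W m^-2 per unit
    optical depth, f_f: fine-mode fraction, f_af: anthropogenic fraction of
    the fine mode); area_weights are grid-cell areas."""
    delta_anthro = delta * f_f * f_af      # anthropogenic optical depth
    forcing = E * delta_anthro             # W m^-2 per cell (clear sky)
    return np.sum(forcing * area_weights) / np.sum(area_weights)
```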
Abstract:
The European Centre for Medium-Range Weather Forecasts (ECMWF) provides an aerosol reanalysis starting in 2003 for the Monitoring Atmospheric Composition and Climate (MACC) project. The reanalysis assimilates total aerosol optical depth retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) to correct for model departures from observed aerosols. The reanalysis therefore combines satellite retrievals with the full spatial coverage of a numerical model. Reanalysed products are used here to estimate the shortwave direct and first indirect radiative forcing of anthropogenic aerosols over the period 2003–2010, using methods previously applied to satellite retrievals of aerosols and clouds. The best estimate of globally averaged, all-sky direct radiative forcing is −0.7 ± 0.3 W m⁻². The standard deviation is obtained by a Monte Carlo analysis of uncertainties, which accounts for uncertainties in the aerosol anthropogenic fraction, aerosol absorption, and cloudy-sky effects. Further accounting for differences between present-day natural and pre-industrial aerosols yields a direct radiative forcing estimate of −0.4 ± 0.3 W m⁻². The best estimate of globally averaged, all-sky first indirect radiative forcing is −0.6 ± 0.4 W m⁻². Its standard deviation accounts for uncertainties in the aerosol anthropogenic fraction and in the susceptibilities of cloud albedo and cloud droplet number concentration to aerosol changes. The distribution of first indirect radiative forcing is asymmetric and is bounded by −0.1 and −2.0 W m⁻². To decrease these uncertainty ranges, better observational constraints on aerosol absorption and on the sensitivity of cloud droplet number concentrations to aerosol changes are required.
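The quoted uncertainty ranges come from a Monte Carlo propagation of uncertain inputs onto the forcing estimate. A minimal sketch of that kind of propagation, with made-up input distributions and a simple multiplicative combination (neither is the study's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical input distributions for the uncertain factors named in the abstract.
clear_sky_forcing = rng.normal(-1.0, 0.2, n)   # W m^-2, illustrative
anthro_fraction   = rng.uniform(0.5, 0.9, n)   # anthropogenic share of AOD
absorption_factor = rng.normal(1.0, 0.15, n)   # scaling for aerosol absorption
cloudy_sky_factor = rng.uniform(0.4, 0.8, n)   # attenuation by clouds

all_sky_forcing = (clear_sky_forcing * anthro_fraction
                   * absorption_factor * cloudy_sky_factor)

print(f"best estimate: {all_sky_forcing.mean():.2f} W m^-2, "
      f"std dev: {all_sky_forcing.std():.2f} W m^-2")
```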
Abstract:
In this paper we introduce a new testing procedure for evaluating the rationality of fixed-event forecasts based on a pseudo-maximum likelihood estimator. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters face a credibility loss for making large changes to their forecasts. The test is illustrated using monthly fixed-event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational, while use of the Gaussian-based test implies that those forecasts are irrational. The difference in the results is due to the nature of the underlying data.
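As a rough illustration of the kind of check involved (not the paper's pseudo-maximum likelihood procedure), a weak-rationality test for fixed-event forecasts can ask whether forecast revisions are predictable from past revisions, using standard errors that do not rely on normality. The helper below is a hedged sketch with hypothetical data:

```python
import numpy as np
import statsmodels.api as sm

def revision_rationality_test(forecasts):
    """Illustrative weak-rationality check: regress the current forecast
    revision on the previous one and test whether intercept and slope are
    zero, using HAC standard errors instead of assuming Gaussian errors."""
    revisions = np.diff(np.asarray(forecasts, dtype=float))
    y, x = revisions[1:], revisions[:-1]
    res = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HAC",
                                            cov_kwds={"maxlags": 2})
    return res.params, res.pvalues

# Hypothetical monthly fixed-event forecast path for a single target:
params, pvals = revision_rationality_test(
    [2.1, 2.0, 2.3, 2.2, 2.4, 2.4, 2.5, 2.3, 2.4, 2.5])
```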
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime-switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both describe the data better than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
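For orientation, a minimal sketch of fitting a two-state Markov switching model with regime-dependent mean and variance, using statsmodels' MarkovRegression on synthetic data; the specification is illustrative and not the paper's exact model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Synthetic "returns": a calm regime followed by a volatile regime.
returns = np.concatenate([rng.normal(0.005, 0.01, 200),
                          rng.normal(-0.01, 0.04, 100)])

# Two regimes, switching mean (trend) and switching variance.
model = sm.tsa.MarkovRegression(returns, k_regimes=2, switching_variance=True)
res = model.fit()

print(res.summary())
# res.smoothed_marginal_probabilities gives, per observation, the smoothed
# probability of being in each regime.
```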
Abstract:
This paper employs an extensive Monte Carlo study to test the size and power of the BDS and close return methods of testing for departures from independent and identical distribution. It is found that the finite-sample properties of the BDS test are far superior and that the close return method cannot be recommended as a model diagnostic. Neither test can be reliably used for very small samples, while the close return test has low power even at large sample sizes.
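A small sketch of the kind of size experiment described here, assuming the BDS implementation shipped with statsmodels (statsmodels.tsa.stattools.bds); the replication counts and settings are illustrative, not the paper's design:

```python
import numpy as np
from statsmodels.tsa.stattools import bds

rng = np.random.default_rng(2)
n_reps, n_obs, alpha = 500, 250, 0.05

# Empirical size: how often the BDS test (embedding dimension 2) rejects
# i.i.d. data at the nominal 5% level; a well-sized test rejects ~5% of the time.
rejections = 0
for _ in range(n_reps):
    x = rng.standard_normal(n_obs)          # i.i.d. series, so the null is true
    stat, pvalue = bds(x, max_dim=2)
    if np.atleast_1d(pvalue)[0] < alpha:
        rejections += 1

print(f"empirical size at nominal {alpha:.0%}: {rejections / n_reps:.3f}")
```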
Abstract:
In this paper we present the capability of a new network of field mill sensors to monitor the atmospheric electric field at various locations in South America; we also show some early results. The main objective of the new network is to obtain the characteristic Universal Time diurnal curve of the atmospheric electric field in fair weather, known as the Carnegie curve. The Carnegie curve is closely related to the current sources flowing in the Global Atmospheric Electric Circuit, so another goal is to study this relationship on various time scales (transient/monthly/seasonal/annual). By operating this new network, we may also study departures of the Carnegie curve from its long-term average value related to various solar, geophysical and atmospheric phenomena such as the solar cycle, solar flares and energetic charged particles, galactic cosmic rays, seismic activity and specific meteorological events. We then expect to have a better understanding of the influence of these phenomena on the Global Atmospheric Electric Circuit and its time-varying behavior.
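The Carnegie curve is essentially the fair-weather field averaged by Universal Time hour. A minimal sketch of that reduction with pandas, assuming a hypothetical data layout (the column names are not the network's actual schema):

```python
import pandas as pd

def carnegie_curve(df):
    """Illustrative reduction of field-mill data to a UT diurnal curve.
    Assumes a DataFrame with a UTC timestamp column 'time', a potential-
    gradient column 'efield' (V/m), and a boolean 'fair_weather' flag."""
    fw = df[df["fair_weather"]].copy()
    fw["ut_hour"] = pd.to_datetime(fw["time"], utc=True).dt.hour
    hourly = fw.groupby("ut_hour")["efield"].mean()
    # Express as percent of the daily mean, the usual Carnegie-curve form.
    return 100.0 * hourly / hourly.mean()
```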
Abstract:
Concepts of time-dependent flow in the coupled solar wind-magnetosphere-ionosphere system are discussed and compared with the frequently adopted steady-state paradigm. Flows are viewed as resulting from departures of the system from equilibrium excited by dayside and nightside reconnection processes, with the flows then taking the system back towards a new equilibrium configuration. The response of the system to reconnection impulses, to continuous but unbalanced reconnection, and to balanced steady-state reconnection is discussed in these terms. It is emphasized that in the time-dependent case the ionospheric and interplanetary electric fields are generally inductively decoupled from each other; a simple mapping of the interplanetary electric field along equipotential field lines into the ionosphere occurs only in the electrostatic steady-state case.
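The distinction in the final sentence follows from Faraday's law; the statement below is the standard textbook relation written compactly, not the paper's own notation:

```latex
% Requires amsmath; electrostatic steady state vs. time-dependent (inductive) case.
\[
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
\;\;\Longrightarrow\;\;
\begin{cases}
\partial\mathbf{B}/\partial t = 0: & \mathbf{E} = -\nabla\Phi \ \text{(maps along equipotential field lines)}\\[4pt]
\partial\mathbf{B}/\partial t \neq 0: & \text{ionospheric and interplanetary fields inductively decoupled}
\end{cases}
\]
```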
Abstract:
Basic concepts of the form of high-latitude ionospheric flows and their excitation and decay are discussed in the light of recent high time-resolution measurements made by ground-based radars. It is first pointed out that it is in principle impossible to adequately parameterize these flows by any single quantity derived from concurrent interplanetary conditions. Rather, even at its simplest, the flow must be considered to consist of two basic time-dependent components. The first is the flow driven by magnetopause coupling processes alone, principally by dayside reconnection. These flows may indeed be reasonably parameterized in terms of concurrent near-Earth interplanetary conditions, principally by the interplanetary magnetic field (IMF) vector. The second is the flow driven by tail reconnection alone. As a first approximation these flows may also be parameterized in terms of interplanetary conditions, principally the north-south component of the IMF, but with a delay in the flow response of around 30-60 min relative to the IMF. A delay in the tail response of this order must be present due to the finite speed of information propagation in the system, and we show how "growth" and "decay" of the field and flow configuration then follow as natural consequences. To discuss the excitation and decay of the two reconnection-driven components of the flow we introduce the concept of a flow-free equilibrium configuration for a magnetosphere which contains a given (arbitrary) amount of open flux. Reconnection events act either to create or to destroy open flux, thus causing departures of the system from the equilibrium configuration. Flow is then excited which moves the system back towards equilibrium with the changed amount of open flux. We estimate that the overall time scale associated with the excitation and decay of the flow is about 15 min. The response of the system to both impulsive (flux transfer event) and continuous reconnection is discussed in these terms.
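A toy numerical illustration of the picture sketched in the abstract (not the paper's model): the open flux changes at a rate set by the difference between dayside and nightside reconnection voltages, while the convection response relaxes toward the instantaneous driving with a roughly 15 min time constant. All numbers and the relaxation form are assumptions chosen only to show growth and decay.

```python
import numpy as np

dt, tau = 60.0, 15 * 60.0                  # time step and relaxation time (s)
t = np.arange(0, 3 * 3600.0, dt)           # three hours of toy time

phi_day = np.where((t > 600) & (t < 2400), 50e3, 0.0)   # 30 min dayside burst (V)
phi_night = np.where(t > 5400, 50e3, 0.0)               # delayed tail reconnection (V)

F = np.empty_like(t); V = np.empty_like(t)
F[0], V[0] = 0.4e9, 0.0                    # initial open flux (Wb), initial flow (V)
for i in range(1, t.size):
    F[i] = F[i-1] + (phi_day[i-1] - phi_night[i-1]) * dt   # open-flux budget
    drive = phi_day[i-1] + phi_night[i-1]                   # total reconnection voltage
    V[i] = V[i-1] + (drive - V[i-1]) * dt / tau             # ~15 min relaxation

print(f"peak convection voltage: {V.max() / 1e3:.1f} kV")
```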
Abstract:
Rates of phenotypic evolution vary widely in nature and these rates may often reflect the intensity of natural selection. Here we outline an approach for detecting exceptional shifts in the rate of phenotypic evolution across phylogenies. We introduce a simple new branch-specific metric, ∆V/∆B, that divides observed phenotypic change along a branch into two components: (1) that attributable to the background rate (∆B), and (2) that attributable to departures from the background rate (∆V). Where the amount of change attributable to variation in the rate of morphological evolution is more than double that explained by the background rate (∆V/∆B > 2), we identify this as positive phenotypic selection. We apply our approach to six datasets, finding multiple instances of positive selection in each. Our results support the growing appreciation that the traditional gradual view of phenotypic evolution is rarely upheld, with a more episodic view taking its place. This moves focus away from viewing phenotypic evolution as a simple homogeneous process and facilitates reconciliation with macroevolutionary interpretations from a genetic perspective, paving the way to novel insights into the link between genotype and phenotype. The ability to detect positive selection when genetic data are unavailable or unobtainable represents an attractive prospect for extant species, but when applied to fossil data the approach can also reveal patterns of natural selection in deep time that would otherwise be impossible to detect.
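A simplified sketch of a branch-wise ratio of this kind, under the assumption that the background component is the expected absolute displacement of a Brownian motion over the branch and the excess over that expectation is attributed to rate variation; the paper's actual decomposition may differ, and the names and example values are hypothetical:

```python
import numpy as np

def flag_positive_selection(observed_change, branch_lengths, sigma2_background,
                            threshold=2.0):
    """For each branch: delta_B is the expected |change| under the background
    Brownian rate (sigma * sqrt(2t/pi)); delta_V is the observed excess over
    that expectation; branches with delta_V/delta_B > threshold are flagged."""
    observed = np.abs(np.asarray(observed_change, dtype=float))
    delta_B = np.sqrt(2.0 * sigma2_background * np.asarray(branch_lengths) / np.pi)
    delta_V = np.maximum(observed - delta_B, 0.0)
    ratio = np.divide(delta_V, delta_B, out=np.zeros_like(delta_V),
                      where=delta_B > 0)
    return ratio, ratio > threshold

# Hypothetical example: three branches, one with an unusually large shift.
ratio, flagged = flag_positive_selection([0.1, 0.9, 0.2], [1.0, 1.0, 1.5], 0.05)
```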