874 results for Time-varying covariance matrices
Abstract:
We present observations of a transient event in the dayside auroral ionosphere at magnetic noon. F-region plasma convection measurements were made by the EISCAT radar, operating in the beamswinging “Polar” experiment mode, and simultaneous observations of the dayside auroral emissions were made by optical meridian-scanning photometers and all-sky TV cameras at Ny Ålesund, Spitzbergen. The data were recorded on 9 January 1989, and a sequence of bursts of flow, with associated transient aurora, was observed between 08:45 and 11:00 U.T. In this paper we concentrate on an event around 09:05 U.T. because it is very close to local magnetic noon. The optical data show a transient intensification and widening (in latitude) of the cusp/cleft region, as seen in red line auroral emissions. Over an interval of about 10 min, the band of 630 nm aurora widened from about 1.5° of invariant latitude to over 5° and returned to its original width. Embedded within the widening band of 630 nm emissions were two intense, active 557.7 nm arc fragments with rays, which persisted for about 2 min each. The flow data before and after the optical transient show eastward flows, with speeds increasing markedly with latitude across the band of 630 nm aurora. Strong, apparently westward, flows appeared inside the band while it was widening, but these rotated round to eastward, through northward, as the band shrank to its original width. The observed ion temperatures verify that the flow speeds during the transient were, to a large extent, as derived using the beamswinging technique; but they also show that the flow increase initially occurred in the western azimuth only. This spatial gradient in the flow introduces ambiguity in the direction of these initial flows, and they could have been north-eastward rather than westward.
However, the westward direction derived by the beamswinging is consistent with the motion of the colocated and coincident active 557.7 nm arc fragment. A more stable transient 557.7 nm aurora was found close to the shear between the inferred westward flows and the persisting eastward flows to the north. Throughout the transient, northward flow was observed across the equatorward boundary of the 630 nm aurora. Interpretation of the data is made difficult by the lack of IMF data, problems in distinguishing the cusp and cleft aurora, and uncertainty over which field lines are open and which are closed. However, at magnetic noon there is a 50% probability that we were observing the cusp, in which case from its southerly location we infer that the IMF was southward, and many features are suggestive of time-varying reconnection at a single X-line on the dayside magnetopause. This IMF orientation is also consistent with the polar rain precipitation observed simultaneously by the DMSP-F9 satellite in the southern polar cap. There is also a 25% chance that we were observing the cleft (or the mantle poleward of the cleft). In this case we infer that the IMF was northward, and the transient is well explained by reconnection which is not only transient in time but occurs at various sites located randomly on the dayside magnetopause (i.e. patchy in space). Lastly, there is a 25% chance that we were observing the cusp poleward of the cleft, in which case we infer that IMF Bz was near zero and the transient is explained by a mixture of the previous two interpretations.
Abstract:
We systematically compare the performance of ETKF-4DVAR, 4DVAR-BEN and 4DENVAR with respect to two traditional methods (4DVAR and ETKF) and an ensemble transform Kalman smoother (ETKS) on the Lorenz 1963 model. We specifically investigate this performance with increasing nonlinearity and using a quasi-static variational assimilation algorithm as a comparison. Using the analysis root mean square error (RMSE) as a metric, these methods have been compared considering (1) assimilation window length and observation interval size and (2) ensemble size, to investigate the influence of hybrid background error covariance matrices and nonlinearity on the performance of the methods. For short assimilation windows with close to linear dynamics, it has been shown that all hybrid methods show an improvement in RMSE compared to the traditional methods. For long assimilation window lengths in which nonlinear dynamics are substantial, the variational framework can have difficulties finding the global minimum of the cost function, so we explore a quasi-static variational assimilation (QSVA) framework. Of the hybrid methods, it is seen that under certain parameters, hybrid methods which do not use a climatological background error covariance do not need QSVA to perform accurately. Generally, results show that the ETKS and hybrid methods that do not use a climatological background error covariance matrix with QSVA outperform all other methods, due to the full flow dependency of the background error covariance matrix, which also allows for the most nonlinearity.
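The Lorenz 1963 testbed and the analysis RMSE metric used for the comparison can be sketched as follows. This is a minimal illustration (forward-Euler stepping and the standard parameter values), not the paper's assimilation code:

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) model."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def analysis_rmse(analysis, truth):
    """Analysis root mean square error, the comparison metric."""
    return np.sqrt(np.mean((analysis - truth) ** 2))

# Free-running truth trajectory and a slightly perturbed "analysis";
# chaotic divergence makes the RMSE grow with lead time.
truth = np.array([1.0, 1.0, 1.0])
analysis = truth + 0.1
for _ in range(100):
    truth = lorenz63_step(truth)
    analysis = lorenz63_step(analysis)
err = analysis_rmse(analysis, truth)
```

In the paper the analysis comes from an assimilation cycle rather than a free run; the metric itself is the same.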
Abstract:
High bandwidth-efficiency quadrature amplitude modulation (QAM) signaling widely adopted in high-rate communication systems suffers from a drawback of high peak-to-average power ratio, which may cause the nonlinear saturation of the high power amplifier (HPA) at the transmitter. Thus, practical high-throughput QAM communication systems exhibit nonlinear and dispersive channel characteristics that must be modeled as a Hammerstein channel. Standard linear equalization becomes inadequate for such Hammerstein communication systems. In this paper, we advocate an adaptive B-spline neural network based nonlinear equalizer. Specifically, during the training phase, an efficient alternating least squares (LS) scheme is employed to estimate the parameters of the Hammerstein channel, including both the channel impulse response (CIR) coefficients and the parameters of the B-spline neural network that models the HPA’s nonlinearity. In addition, another B-spline neural network is used to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard LS algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Nonlinear equalization of the Hammerstein channel is then accomplished by the linear equalization based on the estimated CIR as well as the inverse B-spline neural network model. Furthermore, during the data communication phase, the decision-directed LS channel estimation is adopted to track the time-varying CIR. Extensive simulation results demonstrate the effectiveness of our proposed B-spline neural network based nonlinear equalization scheme.
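A Hammerstein channel of the kind described, a memoryless nonlinearity followed by a dispersive linear filter, can be sketched minimally as below. The Saleh AM/AM curve, the CIR, and the test input are illustrative stand-ins (the paper models the HPA with a B-spline neural network):

```python
import numpy as np

def hpa_amam(r, alpha=2.0, beta=1.0):
    """Illustrative memoryless HPA amplitude saturation (Saleh AM/AM
    curve); the paper instead models the nonlinearity with a B-spline
    neural network."""
    return alpha * r / (1.0 + beta * r ** 2)

def hammerstein_channel(x, cir):
    """Hammerstein model: static nonlinearity applied sample-wise,
    followed by a linear dispersive channel with impulse response cir."""
    y = np.sign(x) * hpa_amam(np.abs(x))  # real-valued sketch of the HPA
    return np.convolve(y, cir)[: len(x)]

x = np.array([1.0, 0.0, -1.0])
out = hammerstein_channel(x, cir=np.array([1.0, 0.5]))
```

Equalization then requires inverting both stages: the linear CIR and the saturation curve.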
Abstract:
Implicit dynamic-algebraic equations, known in control theory as descriptor systems, arise naturally in many applications. Such systems may not be regular (often referred to as singular). In that case the equations may not have unique solutions for consistent initial conditions and arbitrary inputs and the system may not be controllable or observable. Many control systems can be regularized by proportional and/or derivative feedback. We present an overview of mathematical theory and numerical techniques for regularizing descriptor systems using feedback controls. The aim is to provide stable numerical techniques for analyzing and constructing regular control and state estimation systems and for ensuring that these systems are robust. State and output feedback designs for regularizing linear time-invariant systems are described, including methods for disturbance decoupling and mixed output problems. Extensions of these techniques to time-varying linear and nonlinear systems are discussed in the final section.
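The role of proportional feedback in regularization can be illustrated on a tiny descriptor system E x' = A x + B u: the pencil (E, A) is regular iff det(λE − A) is not identically zero, and a feedback u = F x can restore regularity. The matrices and the gain F below are illustrative, and the random-λ test is only a practical numerical proxy:

```python
import numpy as np

def is_regular(E, A, trials=5, tol=1e-9, rng=np.random.default_rng(0)):
    """A pencil (E, A) is regular iff det(lam*E - A) is not identically
    zero; evaluating the determinant at a few random lam values is a
    practical numerical proxy for that condition."""
    return any(abs(np.linalg.det(lam * E - A)) > tol
               for lam in rng.uniform(1.0, 2.0, trials))

E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[0.0, 1.0]])  # illustrative proportional feedback gain

# Singular pencil: det(lam*E - A) == 0 for every lam.
assert not is_regular(E, A)
# Feedback u = F x regularizes the closed-loop pencil (E, A + B F):
# det becomes -(lam - 1), not identically zero.
assert is_regular(E, A + B @ F)
```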
Abstract:
The Clouds, Aerosol, and Precipitation in the Marine Boundary Layer (CAP-MBL) deployment at Graciosa Island in the Azores generated a 21-month (April 2009–December 2010) comprehensive dataset documenting clouds, aerosols, and precipitation using the Atmospheric Radiation Measurement Program (ARM) Mobile Facility (AMF). The scientific aim of the deployment is to gain improved understanding of the interactions of clouds, aerosols, and precipitation in the marine boundary layer. Graciosa Island straddles the boundary between the subtropics and midlatitudes in the northeast Atlantic Ocean and consequently experiences a great diversity of meteorological and cloudiness conditions. Low clouds are the dominant cloud type, with stratocumulus and cumulus occurring regularly. Approximately half of all clouds contained precipitation detectable as radar echoes below the cloud base. Radar and satellite observations show that clouds with tops from 1 to 11 km contribute more or less equally to surface-measured precipitation at Graciosa. A wide range of aerosol conditions was sampled during the deployment consistent with the diversity of sources as indicated by back-trajectory analysis. Preliminary findings suggest important two-way interactions between aerosols and clouds at Graciosa, with aerosols affecting light precipitation and cloud radiative properties while being controlled in part by precipitation scavenging. The data from Graciosa are being compared with short-range forecasts made with a variety of models. A pilot analysis with two climate and two weather forecast models shows that they reproduce the observed time-varying vertical structure of lower-tropospheric cloud fairly well but the cloud-nucleating aerosol concentrations less well. The Graciosa site has been chosen to be a permanent fixed ARM site that became operational in October 2013.
Abstract:
Simultaneous all angle collocations (SAACs) of microwave humidity sounders (AMSU-B and MHS) on board polar orbiting satellites are used to estimate scan-dependent biases. This method has distinct advantages over previous methods, such as that the estimated scan-dependent biases are not influenced by diurnal differences between the edges of the scan and the biases can be estimated for both sides of the scan. We find the results are robust in the sense that biases estimated for one satellite pair can be reproduced by double differencing biases of these satellites with a third satellite. Channel 1 of these instruments shows the least bias for all satellites. Channel 2 has biases greater than 5 K and thus needs to be corrected. Channel 3 has biases of about 2 K and more, and they are time-varying for some of the satellites. Channel 4 has the largest bias, about 15 K when the data are averaged over 5 years, but biases of individual months can be as large as 30 K. Channel 5 also has large and time-varying biases for two of the AMSU-Bs. NOAA-15 (N15) channels are found to be affected the most, mainly due to radio frequency interference (RFI) from onboard data transmitters. Channel 4 of N15 shows the largest and time-varying biases, so data of this channel should only be used with caution for climate applications. The two MHS instruments show the best agreement for all channels. Our estimates may be used to correct for scan-dependent biases of these instruments, or at least used as a guideline for excluding channels with large scan asymmetries from scientific analyses.
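The double-differencing consistency check mentioned above amounts to simple arithmetic on inter-satellite biases; the numeric values below are made up for illustration:

```python
def double_difference(bias_ac, bias_bc):
    """Infer the A-B inter-satellite bias b(A,B) from the biases of A
    and B against a common third satellite C: b(A,B) ~ b(A,C) - b(B,C)."""
    return bias_ac - bias_bc

# Made-up brightness-temperature biases against a reference satellite (K):
b_ac, b_bc = 1.8, 0.6
b_ab = double_difference(b_ac, b_bc)  # should match the directly
                                      # estimated A-B bias if robust
```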
Abstract:
Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble is made with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically-based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely regions of subduction in the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and can therefore not be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14 °C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
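Surface nudging of this kind amounts to a relaxation term in the model's SST equation; a one-step explicit sketch (the relaxation time scale tau and the temperatures here are arbitrary illustrations, whereas the study uses a physically based relaxation coefficient):

```python
def nudge_sst(t_model, sst_obs, dt, tau):
    """One explicit relaxation step of model SST toward observations,
    discretizing dT/dt = -(T - SST_obs) / tau with time step dt.
    tau is the relaxation time scale (illustrative value used below)."""
    return t_model - dt * (t_model - sst_obs) / tau

# A 20 degC model surface nudged toward an 18 degC observed SST:
t_new = nudge_sst(20.0, 18.0, dt=1.0, tau=2.0)
```

Each step moves the model SST a fraction dt/tau of the way toward the observed value, so a smaller (physically based) coefficient corresponds to a larger tau and gentler correction.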
Abstract:
The destructive environmental and socio-economic impacts of the El Niño/Southern Oscillation (ENSO) [1, 2] demand an improved understanding of how ENSO will change under future greenhouse warming. Robust projected changes in certain aspects of ENSO have been recently established [3, 4, 5]. However, there is as yet no consensus on the change in the magnitude of the associated sea surface temperature (SST) variability [6, 7, 8], commonly used to represent ENSO amplitude [1, 6], despite its strong effects on marine ecosystems and rainfall worldwide [1, 2, 3, 4, 9]. Here we show that the response of ENSO SST amplitude is time-varying, with an increasing trend in ENSO amplitude before 2040, followed by a decreasing trend thereafter. We attribute the previous lack of consensus to an expectation that the trend in ENSO amplitude over the entire twenty-first century is unidirectional, and to unrealistic model dynamics of tropical Pacific SST variability. We examine these complex processes across 22 models in the Coupled Model Intercomparison Project phase 5 (CMIP5) database [10], forced under historical and greenhouse warming conditions. The nine most realistic models identified show a strong consensus on the time-varying response and reveal that the non-unidirectional behaviour is linked to a longitudinal difference in the surface warming rate across the Indo-Pacific basin. Our results carry important implications for climate projections and climate adaptation pathways.
Abstract:
This work is an assessment of the frequency of extreme values (EVs) of daily rainfall in the city of São Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and Generalized Pareto Distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. However, in this work we use time-dependent thresholds, composed of relatively large p quantiles (for example, p of 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each one of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both annual cycle and linear trend in GPD-4. The GPD-1 with constant scale and shape parameters is the simplest model. For identification of the best model among the four we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant compared to the null hypothesis of no trend, at about the 98% confidence level. The non-parametric Mann-Kendall test also showed the presence of a positive trend in the annual frequency of excesses over high thresholds, with the p-value being virtually zero. Therefore, there is strong evidence that high quantiles of daily rainfall in the city of São Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of daily rainfall amount has increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
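For illustration, GPD parameters for threshold excesses can be estimated from synthetic data. The sketch below uses a constant threshold and a method-of-moments fit rather than the paper's time-dependent thresholds and ML fitting, and the exponential "rainfall" sample is made up (exponential excesses over any threshold follow a GPD with shape 0 and scale equal to the exponential mean):

```python
import numpy as np

def gpd_moments(excess):
    """Method-of-moments estimates of the GPD shape (xi) and scale
    (sigma) from threshold excesses; the paper instead fits by ML."""
    m, v = excess.mean(), excess.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (1.0 + m * m / v)
    return xi, sigma

rng = np.random.default_rng(42)
# Synthetic "daily rainfall" (mm) with an exponential tail of mean 10.
rain = rng.exponential(scale=10.0, size=200000)

u = np.quantile(rain, 0.97)   # constant p = 0.97 threshold for the sketch;
excess = rain[rain > u] - u   # the paper uses time-dependent thresholds
xi, sigma = gpd_moments(excess)  # should be near xi = 0, sigma = 10
```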
Abstract:
Morphological integration refers to the modular structuring of inter-trait relationships in an organism, which could bias the direction and rate of morphological change, either constraining or facilitating evolution along certain dimensions of the morphospace. Therefore, the description of patterns and magnitudes of morphological integration and the analysis of their evolutionary consequences are central to understanding the evolution of complex traits. Here we analyze morphological integration in the skull of several mammalian orders, addressing the following questions: are there common patterns of inter-trait relationships? Are these patterns compatible with hypotheses based on shared development and function? Do morphological integration patterns and magnitudes vary in the same way across groups? We digitized more than 3,500 specimens spanning 15 mammalian orders, estimated the corresponding pooled within-group correlation and variance/covariance matrices for 35 skull traits and compared those matrices among the orders. We also compared observed patterns of integration to theoretical expectations based on common development and function. Our results point to a largely shared pattern of inter-trait correlations, implying that mammalian skull diversity has been produced upon a common covariance structure that remained similar for at least 65 million years. Comparisons with a rodent genetic variance/covariance matrix suggest that this broad similarity extends also to the genetic factors underlying phenotypic variation. In contrast to the relative constancy of inter-trait correlation/covariance patterns, magnitudes varied markedly across groups. Several morphological modules hypothesized from shared development and function were detected in the mammalian taxa studied.
Our data provide evidence that mammalian skull evolution can be viewed as a history of inter-module parcellation, with the modules themselves being more clearly marked in those lineages with lower overall magnitude of integration. The implication of these findings is that the main evolutionary trend in the mammalian skull was one of decreasing the constraints to evolution by promoting a more modular architecture.
Abstract:
Changes in patterns and magnitudes of integration may influence the ability of a species to respond to selection. Consequently, modularity has often been linked to the concept of evolvability, but their relationship has rarely been tested empirically. One possible explanation is the lack of analytical tools to compare patterns and magnitudes of integration among diverse groups that explicitly relate these aspects to the quantitative genetics framework. We apply such a framework here, using the multivariate response to selection equation to simulate the evolutionary behavior of several mammalian orders in terms of their flexibility, evolvability and constraints in the skull. We interpreted these simulation results in light of the integration patterns and magnitudes of the same mammalian groups, described in a companion paper. We found that larger magnitudes of integration were associated with a blurring of the modules in the skull and with larger portions of the total variation explained by size variation, which in turn can exert a strong evolutionary constraint, thus decreasing the evolutionary flexibility. Conversely, lower overall magnitudes of integration were associated with distinct modules in the skull, with a smaller fraction of the total variation associated with size and, consequently, with weaker constraints and more evolutionary flexibility. Flexibility and constraints are, therefore, two sides of the same coin and we found them to be quite variable among mammals. Neither the overall magnitude of morphological integration, the modularity itself, nor its consequences in terms of constraints and flexibility, were associated with the absolute size of the organisms, but were strongly associated with the proportion of the total variation in skull morphology captured by size. Therefore, the history of the mammalian skull is marked by a trade-off between modularity and evolvability.
Our data provide evidence that, despite the stasis in integration patterns, the plasticity in the magnitude of integration in the skull had important consequences in terms of evolutionary flexibility of the mammalian lineages.
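The multivariate response to selection equation referred to above is Lande's Δz̄ = Gβ, where G is the additive genetic variance/covariance matrix and β a selection gradient. A minimal sketch: the toy 2-trait G matrices below are illustrative, and "flexibility" is summarized as the mean cosine between β and the response Δz̄ over random gradients:

```python
import numpy as np

rng = np.random.default_rng(1)

def flexibility(G, n=2000, rng=rng):
    """Mean cosine between random unit selection gradients beta and the
    responses dz = G @ beta (Lande equation); 1 means the population can
    respond in any direction selection points."""
    betas = rng.normal(size=(n, G.shape[0]))
    betas /= np.linalg.norm(betas, axis=1, keepdims=True)
    dz = betas @ G.T
    cos = np.sum(dz * betas, axis=1) / np.linalg.norm(dz, axis=1)
    return cos.mean()

G_integrated = np.array([[1.0, 0.8],
                         [0.8, 1.0]])  # strong inter-trait covariance
G_modular = np.array([[1.0, 0.1],
                      [0.1, 1.0]])     # nearly independent modules

# A less integrated G responds more nearly along the selection gradient,
# i.e. higher evolutionary flexibility, as the abstract describes.
assert flexibility(G_modular) > flexibility(G_integrated)
```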
Abstract:
In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest, with the asymptotic chi-square distribution which guarantees correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported. (C) 2009 The Korean Statistical Society. Published by Elsevier B.V. All rights reserved.
Abstract:
For many learning tasks the duration of the data collection can be greater than the time scale for changes of the underlying data distribution. The question we ask is how to include the information that data are aging. Ad hoc methods to achieve this include the use of validity windows that prevent the learning machine from making inferences based on old data. This introduces the problem of how to define the size of validity windows. In this brief, a new adaptive Bayesian inspired algorithm is presented for learning drifting concepts. It uses the analogy of validity windows in an adaptive Bayesian way to incorporate changes in the data distribution over time. We apply a theoretical approach based on information geometry to the classification problem and measure its performance in simulations. The uncertainty about the appropriate size of the memory windows is dealt with in a Bayesian manner by integrating over the distribution of the adaptive window size. Thus, the posterior distribution of the weights may develop algebraic tails. The learning algorithm results from tracking the mean and variance of the posterior distribution of the weights. It was found that the algebraic tails of this posterior distribution give the learning algorithm the ability to cope with an evolving environment by permitting the escape from local traps.
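A fixed-window analogue of the adaptive scheme described is exponential forgetting, where the forgetting factor lam plays the role of a soft validity window of effective length 1/(1 − lam); the paper instead adapts the window size in a Bayesian way by integrating over its distribution. A minimal sketch with a made-up drifting data stream:

```python
import numpy as np

def track(stream, lam=0.97):
    """Track the mean and variance of a drifting stream with a fixed
    forgetting factor lam (a soft validity window of effective length
    1/(1 - lam)); old data are down-weighted geometrically."""
    mean, var = 0.0, 1.0
    for x in stream:
        mean = lam * mean + (1.0 - lam) * x
        var = lam * var + (1.0 - lam) * (x - mean) ** 2
    return mean, var

rng = np.random.default_rng(0)
# Concept drift: the data mean jumps from 0 to 5 halfway through.
drift = np.concatenate([rng.normal(0.0, 1.0, 2000),
                        rng.normal(5.0, 1.0, 2000)])
mean, var = track(drift)  # mean should settle near the new level of 5
```

A fixed lam trades tracking speed against noise; the Bayesian adaptation in the paper removes the need to choose it by hand.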
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index during the period 1976-1992. We also test a conditional APT model by using the difference between the 3-day rate (Cdb) and the overnight rate as a second factor in addition to the market portfolio in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from individual securities traded on the Brazilian markets. The inclusion of this second factor proves to be important for the appropriate pricing of the portfolios.
Abstract:
Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, as well as studying monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have been mainly adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of those two classes of models to test if no-arbitrage affects forecasting. We construct cross section (allowing arbitrages) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of no-arbitrage versions is due to a better identification of bond risk premium.