941 results for Orthogonal polynomials of a discrete variable
Abstract:
Atmosphere–ocean general circulation models (AOGCMs) predict a weakening of the Atlantic meridional overturning circulation (AMOC) in response to anthropogenic forcing of climate, but there is a large model uncertainty in the magnitude of the predicted change. The weakening of the AMOC is generally understood to be the result of increased buoyancy input to the North Atlantic in a warmer climate, leading to reduced convection and deep water formation. Consistent with this idea, model analyses have shown empirical relationships between the AMOC and the meridional density gradient, but this link is not direct because the large-scale ocean circulation is essentially geostrophic, making currents and pressure gradients orthogonal. Analysis of the budget of kinetic energy (KE) instead of momentum has the advantage of excluding the dominant geostrophic balance. Diagnosis of the KE balance of the HadCM3 AOGCM and its low-resolution version FAMOUS shows that KE is supplied to the ocean by the wind and dissipated by viscous forces in the global mean of the steady-state control climate, and the circulation does work against the pressure-gradient force, mainly in the Southern Ocean. In the Atlantic Ocean, however, the pressure-gradient force does work on the circulation, especially in the high-latitude regions of deep water formation. During CO2-forced climate change, we demonstrate a very good temporal correlation between the AMOC strength and the rate of KE generation by the pressure-gradient force in 50–70°N of the Atlantic Ocean in each of nine contemporary AOGCMs, supporting a buoyancy-driven interpretation of AMOC changes. To account for this, we describe a conceptual model, which offers an explanation of why AOGCMs with stronger overturning in the control climate tend to have a larger weakening under CO2 increase.
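A worked form of the diagnostic at the centre of this abstract may help (the notation below is assumed, not quoted from the paper): the rate of KE generation by the pressure-gradient force over a region $V$ is

$$ G \;=\; -\int_{V} \mathbf{u}\cdot\nabla p \,\mathrm{d}V , $$

so $G>0$ where the pressure-gradient force does work on the circulation (as reported for the high-latitude Atlantic) and $G<0$ where the circulation does work against it (as in the Southern Ocean); the reported correlation is between the AMOC strength and $G$ evaluated over 50–70°N of the Atlantic.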
Abstract:
This paper considers the use of a discrete-time deadbeat control action on systems affected by noise. Variations on the standard controller form are discussed and comparisons are made with controllers in which noise rejection is a higher-priority objective. Both load and random disturbances are considered in the system description, although the aim of the deadbeat design remains the tailoring of reference input variations. Finally, the use of such a deadbeat action within a self-tuning control framework is shown to satisfy, under certain conditions, the self-tuning property, though generally only when an extended form of least-squares estimation is incorporated.
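For readers unfamiliar with the deadbeat idea, the sketch below shows one standard state-space version of it, not the transfer-function/self-tuning formulation the paper works with: all closed-loop poles are placed at z = 0, so the noise-free state settles in at most n steps. The function name and the example system are illustrative assumptions.

```python
import numpy as np

def deadbeat_gain(A, B):
    """State-feedback gain K placing every closed-loop pole of (A - B K) at z = 0
    via Ackermann's formula (single-input, discrete-time system).  With all poles
    at the origin, A - B K is nilpotent, so the noise-free state reaches zero in
    at most n steps -- the classical deadbeat response."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^{n-1} B]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial z^n evaluated at A
    phi_A = np.linalg.matrix_power(A, n)
    e_n = np.zeros((1, n))
    e_n[0, -1] = 1.0
    return e_n @ np.linalg.inv(ctrb) @ phi_A

# Example: double integrator sampled with unit step
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = deadbeat_gain(A, B)
print(np.linalg.eigvals(A - B @ K))  # both eigenvalues are (numerically) zero
```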
Abstract:
A novel approach is presented for the evaluation of circulation type classifications (CTCs) in terms of their capability to predict surface climate variations. The approach is analogous to that for probabilistic meteorological forecasts and is based on the Brier skill score. This score is shown to take a particularly simple form in the context of CTCs and to quantify the resolution of a climate variable by the classifications. The sampling uncertainty of the skill can be estimated by means of nonparametric bootstrap resampling. The evaluation approach is applied for a systematic intercomparison of 71 CTCs (objective and manual, from COST Action 733) with respect to their ability to resolve daily precipitation in the Alpine region. For essentially all CTCs, the Brier skill score is found to be higher for weak and moderate compared to intense precipitation, for winter compared to summer, and over the north and west of the Alps compared to the south and east. Moreover, CTCs with a higher number of types exhibit better skill than CTCs with few types. Among CTCs with comparable type number, the best automatic classifications are found to outperform the best manual classifications. It is not possible to single out one ‘best’ classification for Alpine precipitation, but there is a small group showing particularly high skill.
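A minimal sketch of how such a skill evaluation could be set up (illustrative code only, not the COST 733 software; the function names, event definition and bootstrap percentiles are assumptions):

```python
import numpy as np

def brier_skill_score(types, event):
    """Brier skill score of a circulation-type classification for a binary event
    (e.g. daily precipitation above a threshold).  Within each type the forecast
    probability is the conditional event frequency; the reference forecast is the
    overall climatological frequency, so the score measures the resolution of the
    event by the classification."""
    types = np.asarray(types)
    event = np.asarray(event, dtype=float)
    freq = {t: event[types == t].mean() for t in np.unique(types)}
    forecast = np.array([freq[t] for t in types])
    bs = np.mean((forecast - event) ** 2)            # Brier score of the classification
    bs_ref = np.mean((event.mean() - event) ** 2)    # Brier score of climatology
    return 1.0 - bs / bs_ref

def bootstrap_skill(types, event, n_boot=1000, seed=0):
    """Nonparametric bootstrap (resampling days with replacement) for the
    sampling uncertainty of the skill score."""
    rng = np.random.default_rng(seed)
    types = np.asarray(types)
    event = np.asarray(event, dtype=float)
    n = len(event)
    scores = [brier_skill_score(types[i], event[i])
              for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(scores, [2.5, 97.5])
```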
Abstract:
Pardo, Patie, and Savov derived, under mild conditions, a Wiener-Hopf type factorization for the exponential functional of proper Lévy processes. In this paper, we extend this factorization by relaxing a finite moment assumption as well as by considering the exponential functional for killed Lévy processes. As a by-product, we derive some interesting fine distributional properties enjoyed by a large class of such random variables, such as the absolute continuity of the distribution and the smoothness, boundedness or complete monotonicity of the density. These results are then used to derive similar properties for the law of the maximum and the first passage time of some stable Lévy processes. Thus, for example, we show that for any stable process with $\rho\in(0,\frac{1}{\alpha}-1]$, where $\rho\in[0,1]$ is the positivity parameter and $\alpha$ is the stable index, the first passage time has a bounded and non-increasing density on $\mathbb{R}_+$. We also generate many instances of integral or power series representations for the law of the exponential functional of Lévy processes with one- or two-sided jumps. The proof of our main results requires different devices from those developed by Pardo, Patie, and Savov. It relies in particular on a generalization of a transform recently introduced by Chazal et al. together with some extensions of Wiener-Hopf techniques to killed Lévy processes. The factorizations developed here also allow for further applications, which we only indicate here.
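For orientation, the object being factorized is the exponential functional of a Lévy process $\xi$ and its killed counterpart (standard notation for this literature, not quoted from the paper):

$$ I_{\xi} \;=\; \int_{0}^{\infty} e^{-\xi_t}\,\mathrm{d}t , \qquad I_{\xi,q} \;=\; \int_{0}^{\mathbf{e}_q} e^{-\xi_t}\,\mathrm{d}t , $$

where $\mathbf{e}_q$ is an exponential random variable with parameter $q>0$ independent of $\xi$, and the first integral is almost surely finite when $\xi$ drifts to $+\infty$.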
Abstract:
The interannual variability of the stratospheric polar vortex during winter in both hemispheres is observed to correlate strongly with the phase of the quasi-biennial oscillation (QBO) in tropical stratospheric winds. It follows that the lack of a spontaneously generated QBO in most atmospheric general circulation models (AGCMs) adversely affects the nature of polar variability in such models. This study examines QBO–vortex coupling in an AGCM in which a QBO is spontaneously induced by resolved and parameterized waves. The QBO–vortex coupling in the AGCM compares favorably to that seen in reanalysis data [from the 40-yr ECMWF Re-Analysis (ERA-40)], provided that careful attention is given to the definition of QBO phase. A phase angle representation of the QBO is employed that is based on the two leading empirical orthogonal functions of equatorial zonal wind vertical profiles. This yields a QBO phase that serves as a proxy for the vertical structure of equatorial winds over the whole depth of the stratosphere and thus provides a means of subsampling the data to select QBO phases with similar vertical profiles of equatorial zonal wind. Using this subsampling, it is found that the QBO phase that induces the strongest polar vortex response in early winter differs from that which induces the strongest late-winter vortex response. This is true in both hemispheres and for both the AGCM and ERA-40. It follows that the strength and timing of QBO influence on the vortex may be affected by the partial seasonal synchronization of QBO phase transitions that occurs both in observations and in the model. This provides a mechanism by which changes in the strength of QBO–vortex correlations may exhibit variability on decadal time scales. In the model, such behavior occurs in the absence of external forcings or interannual variations in sea surface temperatures.
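A schematic of the phase-angle construction described here (the exact normalisation and data handling follow the paper and are not specified in the abstract; array shapes and names are assumptions):

```python
import numpy as np

def qbo_phase(u):
    """Phase angle of the QBO from equatorial zonal-wind profiles.

    u : array of shape (n_months, n_levels), monthly-mean equatorial zonal wind.
    The two leading EOFs of the anomaly profiles are obtained from an SVD; the
    phase is the angle of the state in the plane of the two leading principal
    components, so it acts as a proxy for the vertical structure of the winds
    over the depth of the stratosphere."""
    anom = u - u.mean(axis=0)
    pcs, sv, eofs = np.linalg.svd(anom, full_matrices=False)
    pc1 = pcs[:, 0] * sv[0]                 # leading principal components
    pc2 = pcs[:, 1] * sv[1]
    # Standardise each PC so the (pc1, pc2) trajectory is roughly circular,
    # then read off the phase angle for every month.
    return np.arctan2(pc2 / pc2.std(), pc1 / pc1.std())
```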
Abstract:
A fingerprint method for detecting anthropogenic climate change is applied to new simulations with a coupled ocean-atmosphere general circulation model (CGCM) forced by increasing concentrations of greenhouse gases and aerosols covering the years 1880 to 2050. In addition to the anthropogenic climate change signal, the space-time structure of the natural climate variability for near-surface temperatures is estimated from instrumental data over the last 134 years and two 1000 year simulations with CGCMs. The estimates are compared with paleoclimate data over 570 years. The space-time information on both the signal and the noise is used to maximize the signal-to-noise ratio of a detection variable obtained by applying an optimal filter (fingerprint) to the observed data. The inclusion of aerosols slows the predicted future warming. The probability that the observed increase in near-surface temperatures in recent decades is of natural origin is estimated to be less than 5%. However, this number is dependent on the estimated natural variability level, which is still subject to some uncertainty.
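The optimal-fingerprint construction referred to here can be summarised as follows (generic notation in the spirit of Hasselmann's formulation, not the paper's own symbols): with anthropogenic signal pattern $\mathbf{g}$ and natural-variability covariance $\mathbf{C}$, the detection variable obtained from an observed field $\boldsymbol{\psi}$ is

$$ d \;=\; \mathbf{f}^{\mathsf{T}}\boldsymbol{\psi}, \qquad \mathbf{f} \;=\; \mathbf{C}^{-1}\mathbf{g}, $$

the choice of fingerprint $\mathbf{f}$ that maximises the signal-to-noise ratio among all linear filters, the optimum being $(\mathbf{g}^{\mathsf{T}}\mathbf{C}^{-1}\mathbf{g})^{1/2}$.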
Abstract:
Introduction. Feature usage is a prerequisite to realising the benefits of investments in feature-rich systems. We propose that conceptualising the dependent variable 'system use' as 'level of use' and specifying it as a formative construct has greater value for measuring the post-adoption use of feature-rich systems. We then validate the content of the construct as a first step in developing a research instrument to measure it. The context of our study is the post-adoption use of electronic medical records (EMR) by primary care physicians. Method. Initially, a literature review of the empirical context defines the scope based on prior studies. Having identified core features from the literature, they are further refined with the help of experts in a consensus-seeking process that follows the Delphi technique. Results. The methodology was successfully applied to EMRs, which were selected as an example of feature-rich systems. A review of EMR usage and regulatory standards provided the feature input for the first round of the Delphi process. A panel of experts then reached consensus after four rounds, identifying ten task-based features that would be indicators of level of use. Conclusions. To study why some users deploy more advanced features than others, theories of post-adoption require a rich formative dependent variable that measures level of use. We have demonstrated that a context-sensitive literature review followed by refinement through a consensus-seeking process is a suitable methodology to validate the content of this dependent variable. This is the first step of instrument development prior to statistical confirmation with a larger sample.
Abstract:
Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma distributed. The computation is carried out by using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation when it exists may lead to biased estimates.
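As a sketch of the kind of hierarchy involved (this is Geweke's (1993) generic treatment of heteroscedastic regression errors; the paper's exact prior settings are not stated in the abstract), each error gets its own variance scale factor:

$$ \varepsilon_i \mid \lambda_i \;\sim\; \mathcal{N}\!\left(0,\sigma^{2}\lambda_i\right), \qquad \frac{\nu}{\lambda_i} \;\sim\; \chi^{2}_{\nu} \;\;\text{independently}, $$

so that marginally the errors are Student-$t$ with $\nu$ degrees of freedom; in the Markov chain Monte Carlo sampler the $\lambda_i$ are drawn in an additional (augmented) step conditional on the current residuals, separately for the structural and the instrument equation.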
Abstract:
The hypothesis that pronouns can be resolved via either the syntax or the discourse representation has played an important role in linguistic accounts of pronoun interpretation (e.g. Grodzinsky & Reinhart, 1993). We report the results of an eye-movement monitoring study investigating the relative timing of syntactically-mediated variable binding and discourse-based coreference assignment during pronoun resolution. We examined whether ambiguous pronouns are preferentially resolved via either the variable binding or coreference route, and in particular tested the hypothesis that variable binding should always be computed before coreference assignment. Participants’ eye movements were monitored while they read sentences containing a pronoun and two potential antecedents, a c-commanding quantified noun phrase and a non c-commanding proper name. Gender congruence between the pronoun and either of the two potential antecedents was manipulated as an experimental diagnostic for dependency formation. In two experiments, we found that participants’ reading times were reliably longer when the linearly closest antecedent mismatched in gender with the pronoun. These findings fail to support the hypothesis that variable binding is computed before coreference assignment, and instead suggest that antecedent recency plays an important role in affecting the extent to which a variable binding antecedent is considered. We discuss these results in relation to models of memory retrieval during sentence comprehension, and interpret the antecedent recency preference as an example of forgetting over time.
Abstract:
This paper introduces a new adaptive nonlinear equalizer relying on a radial basis function (RBF) model, which is designed based on the minimum bit error rate (MBER) criterion, in the system setting of an intersymbol interference channel plus co-channel interference. Our proposed algorithm is referred to as the on-line mixture of Gaussians estimator aided MBER (OMG-MBER) equalizer. Specifically, a mixture of Gaussians based probability density function (PDF) estimator is used to model the PDF of the decision variable, for which a novel on-line PDF update algorithm is derived to track the incoming data. With the aid of this novel on-line mixture of Gaussians based sample-by-sample updated PDF estimator, our adaptive nonlinear equalizer is capable of updating its parameters sample by sample to aim directly at minimizing the RBF nonlinear equalizer's achievable bit error rate (BER). The proposed OMG-MBER equalizer significantly outperforms the existing on-line nonlinear MBER equalizer, known as the least bit error rate equalizer, in terms of both the convergence speed and the achievable BER, as is confirmed in our simulation study.
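In outline (generic notation, binary signalling assumed for simplicity; this is not quoted from the paper), the conditional PDF of the decision variable $y$ is modelled as a mixture of Gaussians whose parameters are updated sample by sample, and the equalizer is adapted to minimise the BER estimated from that model:

$$ \hat{p}(y\mid s=+1) \;=\; \sum_{m=1}^{M}\pi_m\,\mathcal{N}\!\left(y;\mu_m,\sigma_m^{2}\right), \qquad \hat{P}_e \;=\; \int_{-\infty}^{0}\hat{p}(y\mid s=+1)\,\mathrm{d}y \;=\; \sum_{m=1}^{M}\pi_m\,Q\!\left(\frac{\mu_m}{\sigma_m}\right), $$

where $Q(\cdot)$ is the Gaussian tail function; the equalizer weights are then moved along a stochastic-gradient estimate of $\partial\hat{P}_e/\partial\mathbf{w}$.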
Abstract:
We develop an orthogonal forward selection (OFS) approach to construct radial basis function (RBF) network classifiers for two-class problems. Our approach integrates several concepts in probabilistic modelling, including cross validation, mutual information and Bayesian hyperparameter fitting. At each stage of the OFS procedure, one model term is selected by maximising the leave-one-out mutual information (LOOMI) between the classifier's predicted class labels and the true class labels. We derive the formula of LOOMI within the OFS framework so that the LOOMI can be evaluated efficiently for model term selection. Furthermore, a Bayesian procedure of hyperparameter fitting is also integrated into each stage of the OFS to infer the l2-norm based local regularisation parameter from the data. Since each forward stage is effectively the fitting of a one-variable model, this task is very fast. The classifier construction procedure is automatically terminated without the need for an additional stopping criterion, yielding very sparse RBF classifiers with excellent classification generalisation performance, which is particularly useful for noisy data sets with highly overlapping class distributions. A number of benchmark examples are employed to demonstrate the effectiveness of our proposed approach.
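The selection criterion is the mutual information between predicted and true class labels; in its plain (non-leave-one-out) form for binary labels it reads (generic notation; the leave-one-out variant used in the paper is derived within the OFS framework itself):

$$ I(\hat{Y};Y) \;=\; \sum_{\hat{y}\in\{0,1\}}\sum_{y\in\{0,1\}} p(\hat{y},y)\,\log\frac{p(\hat{y},y)}{p(\hat{y})\,p(y)} , $$

with the joint and marginal probabilities estimated from the confusion counts of the current classifier; at each forward step the candidate RBF term whose inclusion maximises this quantity is appended to the model.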
Abstract:
The aim of this study was to investigate the effects of numerous milk compositional factors on milk coagulation properties using Partial Least Squares (PLS). Milk from herds of Jersey and Holstein-Friesian cattle was collected across the year and blended (n=55), to maximize variation in composition and coagulation. The milk was analysed for casein, protein, fat, titratable acidity, lactose, Ca2+, urea content, casein micelle size (CMS), fat globule size, somatic cell count and pH. Milk coagulation properties were defined as coagulation time, curd firmness and curd firmness rate measured by a controlled strain rheometer. The models derived from PLS had higher predictive power than previous models, demonstrating the value of measuring more milk components. In addition to the well-established relationships with casein and protein levels, CMS and fat globule size were found to have a strong impact on all three models. The study also found a positive impact of fat on milk coagulation properties and a strong relationship between lactose and curd firmness, and between urea and curd firmness rate, all of which warrant further investigation due to the current lack of knowledge of the underlying mechanisms. These findings demonstrate the importance of using a wider range of milk compositional variables for the prediction of milk coagulation properties, and hence as indicators of milk suitability for cheese making.
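A minimal sketch of fitting such a PLS model (placeholder data; the predictor list mirrors the abstract, while the number of latent variables and the cross-validation scheme are assumptions rather than the authors' protocol):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
predictors = ["casein", "protein", "fat", "titratable acidity", "lactose",
              "Ca2+", "urea", "casein micelle size", "fat globule size",
              "somatic cell count", "pH"]
X = rng.normal(size=(55, len(predictors)))   # 55 blended milk samples (placeholder values)
y = rng.normal(size=55)                      # e.g. curd firmness (placeholder values)

pls = PLSRegression(n_components=3)                    # number of latent variables to tune
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()      # cross-validated predictions
print("cross-validated R^2:", np.corrcoef(y, y_cv)[0, 1] ** 2)
```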
Abstract:
We consider the two-dimensional Helmholtz equation with constant coefficients on a domain with piecewise analytic boundary, modelling the scattering of acoustic waves at a sound-soft obstacle. Our discretisation relies on the Trefftz-discontinuous Galerkin approach with plane wave basis functions on meshes with very general element shapes, geometrically graded towards domain corners. We prove exponential convergence of the discrete solution in terms of number of unknowns.
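For context (generic notation, not the paper's), the local Trefftz basis on each mesh element consists of plane waves at the problem wavenumber $k$:

$$ \varphi_{j}(\mathbf{x}) \;=\; e^{\,\mathrm{i}k\,\mathbf{d}_{j}\cdot\mathbf{x}}, \qquad |\mathbf{d}_{j}| = 1, \quad j = 1,\dots,p, $$

each of which satisfies $-\Delta\varphi_{j}-k^{2}\varphi_{j}=0$ exactly; it is this property that makes the discrete space a Trefftz space, and the directions $\mathbf{d}_{j}$ are typically taken equispaced on the unit circle.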
Abstract:
In this work we construct reliable a posteriori estimates for some spatially semi-discrete discontinuous Galerkin schemes applied to nonlinear systems of hyperbolic conservation laws. We make use of appropriate reconstructions of the discrete solution together with the relative entropy stability framework, which leads to error control in the case of smooth solutions. The methodology we use is quite general and allows for a posteriori control of discontinuous Galerkin schemes with standard flux choices which appear in the approximation of conservation laws. In addition to the analysis, we conduct some numerical benchmarking to test the robustness of the resultant estimator.
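The relative-entropy framework invoked here is the classical one (generic notation): given a strictly convex entropy $\eta$ for the system, the relative entropy between the exact solution $u$ and a reconstruction $\hat{u}$ of the numerical solution is

$$ \eta(u\mid\hat{u}) \;=\; \eta(u)-\eta(\hat{u})-\mathrm{D}\eta(\hat{u})\,(u-\hat{u}), $$

which behaves like $|u-\hat{u}|^{2}$ near $\hat{u}$; a Gronwall-type stability argument for this quantity converts the residual of the reconstruction into a computable a posteriori bound, valid while the exact solution remains smooth.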