913 results for least-squares
Conditioning of incremental variational data assimilation, with application to the Met Office system
Abstract:
Implementations of incremental variational data assimilation require the iterative minimization of a series of linear least-squares cost functions. The accuracy and speed with which these linear minimization problems can be solved are determined by the condition number of the Hessian of the problem. In this study, we examine how different components of the assimilation system influence this condition number. Theoretical bounds on the condition number for a single-parameter system are presented and used to predict how the condition number is affected by the distribution and accuracy of the observations and by the specified length-scales in the background error covariance matrix. The theoretical results are verified in the Met Office variational data assimilation system, using both pseudo-observations and real data.
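To make the dependence concrete, the sketch below (illustrative only, not the Met Office system; the grid size, exponential covariance model, and all parameter values are assumptions) computes the condition number of the variational Hessian S = B⁻¹ + Hᵀ R⁻¹ H for a single periodic variable and prints how it varies with the background length-scale and the observation error:

```python
import numpy as np

def hessian_cond(n=100, L=4.0, sigma_o=0.5, obs_every=4):
    """Condition number of S = B^-1 + H^T R^-1 H on a periodic 1-D grid."""
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n - d)                     # periodic distance on the grid
    B = np.exp(-d / L)                           # exponential (Markov) covariance, unit variance
    H = np.eye(n)[::obs_every]                   # observe every obs_every-th grid point
    S = np.linalg.inv(B) + H.T @ H / sigma_o**2  # Hessian with R = sigma_o^2 * I
    return np.linalg.cond(S)

for L in (2.0, 4.0, 8.0):
    for sigma_o in (1.0, 0.5, 0.1):
        print(f"L={L}, sigma_o={sigma_o}: cond={hessian_cond(L=L, sigma_o=sigma_o):.3e}")
```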
Abstract:
The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate, a nonlinear least-squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method, where the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low-order approximations to the problem, based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model, a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
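As a generic illustration of the inner iteration described here (a toy curve-fitting problem, not the Eady model or the reduced-order scheme), a bare-bones Gauss–Newton loop for min_x ½‖f(x)‖² looks like this:

```python
import numpy as np

def gauss_newton(f, jac, x0, n_iter=20, tol=1e-10):
    """Gauss-Newton for nonlinear least squares: min_x 0.5*||f(x)||^2."""
    x = x0.copy()
    for _ in range(n_iter):
        r = f(x)                                      # residual vector
        J = jac(x)                                    # Jacobian of the residuals
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)   # inner linear least-squares step
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# toy usage: fit y = a*exp(b*t) to noisy data
t = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
f = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
print(gauss_newton(f, jac, np.array([1.0, -1.0])))    # approx [2.0, -1.5]
```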
Abstract:
Numerical weather prediction (NWP) centres use numerical models of the atmospheric flow to forecast future weather states from an estimate of the current state. Variational data assimilation (VAR) is used commonly to determine an optimal state estimate that minimizes the errors between observations of the dynamical system and model predictions of the flow. The rate of convergence of the VAR scheme and the sensitivity of the solution to errors in the data are dependent on the condition number of the Hessian of the variational least-squares objective function. The traditional formulation of VAR is ill-conditioned and hence leads to slow convergence and an inaccurate solution. In practice, operational NWP centres precondition the system via a control variable transform to reduce the condition number of the Hessian. In this paper we investigate the conditioning of VAR for a single, periodic, spatially distributed state variable. We present theoretical bounds on the condition number of the original and preconditioned Hessians and hence demonstrate the improvement produced by the preconditioning. We also investigate theoretically the effect of observation position and error variance on the preconditioned system and show that the problem becomes more ill-conditioned with increasingly dense and accurate observations. Finally, we confirm the theoretical results in an operational setting by giving experimental results from the Met Office variational system.
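The effect of the control variable transform (CVT) can be illustrated numerically (a sketch under assumed parameter values, not the operational configuration) by comparing the condition numbers of the original Hessian B⁻¹ + Hᵀ R⁻¹ H and the preconditioned Hessian I + B¹ᐟ² Hᵀ R⁻¹ H B¹ᐟ²:

```python
import numpy as np

n, L, sigma_o, obs_every = 100, 4.0, 0.2, 2
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, n - d)                        # periodic 1-D grid
B = np.exp(-d / L)                              # background covariance (unit variance)
H = np.eye(n)[::obs_every]                      # observation operator

w, V = np.linalg.eigh(B)
B_half = V @ np.diag(np.sqrt(w)) @ V.T          # symmetric square root of B

S = np.linalg.inv(B) + H.T @ H / sigma_o**2                   # original Hessian
S_pre = np.eye(n) + B_half @ H.T @ H @ B_half / sigma_o**2    # after the CVT
print("cond(S)     =", np.linalg.cond(S))
print("cond(S_pre) =", np.linalg.cond(S_pre))
```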
Abstract:
Recursive Learning Control (RLC) has the potential to significantly reduce the tracking error in many repetitive trajectory applications. This paper presents an application of RLC to a soil testing load frame, where non-adaptive techniques struggle with the highly nonlinear nature of soil. The main purpose of the controller is to apply a sinusoidal force reference trajectory to a soil sample with a high degree of accuracy and repeatability. The controller uses a feedforward control structure, a recursive least-squares adaptation algorithm, and RLC to compensate for periodic errors. Tracking error is reduced and stability is maintained across various soil sample responses.
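The recursive least-squares (RLS) adaptation at the heart of such a controller can be sketched in isolation (a generic exponentially weighted RLS update, not the load-frame controller itself; the forgetting factor and initialisation values are assumptions):

```python
import numpy as np

class RLS:
    """Exponentially weighted recursive least-squares estimator."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = delta * np.eye(n)      # inverse-correlation matrix estimate
        self.lam = lam                  # forgetting factor

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)    # gain vector
        e = y - phi @ self.theta              # a priori error
        self.theta = self.theta + k * e
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return e

# toy usage: track the two taps of y[t] = 0.8*u[t] - 0.3*u[t-1] + noise
rng = np.random.default_rng(0)
u = rng.normal(size=500)
est = RLS(2)
for t in range(1, u.size):
    phi = np.array([u[t], u[t - 1]])
    est.update(phi, 0.8 * u[t] - 0.3 * u[t - 1] + 0.01 * rng.normal())
print(est.theta)    # approx [0.8, -0.3]
```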
Abstract:
We develop a complex-valued (CV) B-spline neural network approach for efficient identification and inversion of CV Wiener systems. The CV nonlinear static function in the Wiener system is represented using the tensor product of two univariate B-spline neural networks. With the aid of a least-squares parameter initialisation, the Gauss–Newton algorithm effectively estimates the model parameters, which include the CV linear dynamic model coefficients and the B-spline neural network weights. The identification algorithm naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. An accurate inverse of the CV Wiener system is then obtained, in which the inverse of the CV nonlinear static function of the Wiener system is calculated efficiently using the Gauss–Newton algorithm based on the estimated B-spline neural network model, with the aid of the De Boor recursions. The effectiveness of our approach for identification and inversion of CV Wiener systems is demonstrated using the application of digital predistorter design for high-power amplifiers with memory.
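The De Boor recursion referred to above is standard; a minimal (unoptimised) Cox–de Boor evaluation of a single real-valued B-spline basis function, independent of the paper's complex-valued tensor-product construction, is:

```python
import numpy as np

def bspline_basis(knots, degree, i, x):
    """Cox-de Boor recursion for the i-th B-spline basis function of a given degree.
    Note: the half-open support means x equal to the last knot evaluates to 0."""
    if degree == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + degree] != knots[i]:
        left = ((x - knots[i]) / (knots[i + degree] - knots[i])
                * bspline_basis(knots, degree - 1, i, x))
    right = 0.0
    if knots[i + degree + 1] != knots[i + 1]:
        right = ((knots[i + degree + 1] - x) / (knots[i + degree + 1] - knots[i + 1])
                 * bspline_basis(knots, degree - 1, i + 1, x))
    return left + right

knots = np.arange(8, dtype=float)            # uniform knot vector
print([bspline_basis(knots, 2, 2, t) for t in np.linspace(2.0, 5.0, 5)])
```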
Abstract:
This study examines differences in net selling price for residential real estate across male and female agents. A sample of 2,020 home sales transactions from Fulton County, Georgia is analyzed in a two-stage least-squares, geospatial autoregressive-corrected, semi-log hedonic model to test for gender and gender selection effects. Although agent gender seems to play a role in naïve models, its role becomes inconclusive as variables controlling for possible price and time-on-market expectations of the buyers and sellers are introduced to the models. Clear differences in real estate sales prices, time on market, and agent incomes across genders are unlikely to be due to differences in negotiation performance between genders or the mix of genders in a two-agent negotiation. The evidence suggests an interesting alternative to agent performance: that buyers and sellers with different reservation price and time-on-market expectations, such as those selling foreclosure homes, tend to select agents along gender lines.
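For readers unfamiliar with the estimator, a bare-bones two-stage least-squares routine on synthetic data (the variables and data below are hypothetical stand-ins, not the Fulton County hedonic specification) is:

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: instrument the columns of X with Z."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]    # stage 1: fitted regressors
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)    # stage 2: regress y on X_hat
    return beta

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 2))                    # instruments
u = rng.normal(size=500)                         # unobserved confounder
x = z @ np.array([1.0, -0.5]) + u + rng.normal(size=500)   # endogenous regressor
y = 2.0 * x + u + rng.normal(size=500)           # true coefficient: 2.0
X = np.column_stack([np.ones(500), x])
Z = np.column_stack([np.ones(500), z])
print(tsls(y, X, Z))                             # intercept approx 0, slope approx 2
```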
Abstract:
The concentrations of dissolved noble gases in water are widely used as a climate proxy to determine noble gas temperatures (NGTs); i.e., the temperature of the water when gas exchange last occurred. In this paper we take a step toward applying this principle to fluid inclusions in stalagmites, in order to reconstruct the cave temperature prevailing at the time the inclusion was formed. We present an analytical protocol that allows us to determine accurately the noble gas concentrations and isotope ratios in stalagmites, and which includes a precise manometric determination of the mass of water liberated from fluid inclusions. Most important for NGT determination is to reduce the amount of noble gases liberated from air inclusions, as they mask the temperature-dependent noble gas signal from the water inclusions. We demonstrate that offline pre-crushing in air, followed by extraction of the noble gases and water from the samples by heating, is an appropriate way to separate the gases released from air and water inclusions. Although a large fraction of recent samples analysed by this technique yields NGTs close to present-day cave temperatures, the interpretation of measured noble gas concentrations in terms of NGTs is not yet feasible using the available least-squares fitting models. This is because the noble gas concentrations in stalagmites are not composed solely of the two components that these models can account for: air and air-saturated water (ASW). The observed enrichments in heavy noble gases are interpreted as being due to adsorption during sample preparation in air, whereas the excess in He and Ne is interpreted as an additional noble gas component that is bound in voids in the crystallographic structure of the calcite crystals. As a consequence of our study's findings, NGTs will have to be determined in the future using the concentrations of Ar, Kr and Xe only. This needs to be achieved by further optimizing the sample preparation to minimize atmospheric contamination and to further reduce the amount of noble gases released from air inclusions.
Abstract:
Real estate depreciation continues to be a critical issue for investors and the appraisal profession in the UK in the 1990s. Depreciation-sensitive cash flow models have been developed, but there is a real need to develop further empirical methodologies to determine rental depreciation rates for input into these models. Although building quality has been found to be an important explanatory variable in depreciation, it is very difficult to incorporate it into such models or to analyse it retrospectively. It is essential to examine previous depreciation research from real estate and economics in the USA and UK to understand the issues in constructing a valid and pragmatic way of calculating rental depreciation. Distinguishing between 'depreciation' and 'obsolescence' is important, and the pattern of depreciation in any study can be influenced by such factors as the type (longitudinal or cross-sectional) and timing of the study, and the market state. Longitudinal studies can analyse change more directly than cross-sectional studies. Any methodology for calculating rental depreciation rates should be formulated in the context of such issues as 'censored sample bias', 'lemons' and 'filtering', which have been highlighted in key US literature from the field of economic depreciation. Property depreciation studies in the UK have tended to overlook this literature, however. Although data limitations and constraints reduce the ability of empirical property depreciation work in the UK to consider these issues fully, 'averaging' techniques and ordinary least squares (OLS) regression can both provide a consistent way of calculating rental depreciation rates within a 'cohort' framework.
Abstract:
This research examines the influence of environmental institutional distance between home and host countries on the standardization of environmental performance among multinational enterprises, using ordinary least-squares (OLS) regression techniques and a sample of 128 multinationals from high-polluting industries. The paper examines the environmental institutional distance of countries using the concepts of formal and informal institutional distances. The results show that whereas a high formal environmental distance between home and host countries leads multinational enterprises to achieve a different level of environmental performance according to each country's legal requirements, a high informal environmental distance encourages these firms to unify their environmental performance independently of the countries in which their units are based. The study also discusses the implications for academia, managers, and policy makers.
Abstract:
Details are given of a boundary-fitted mesh generation method for use in modelling free surface flow and water quality. A numerical method has been developed for generating conformal meshes for curvilinear polygonal and multiply-connected regions. The method is based on the Cauchy-Riemann conditions for the analytic function and is able to map a curvilinear polygonal region directly onto a regular polygonal region with horizontal and vertical sides. A set of equations has been derived for determining the lengths of these sides, and the least-squares method has been used to solve them. Several numerical examples are presented to illustrate the method.
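In modern terms, the least-squares step amounts to solving an overdetermined linear system for the unknown side lengths, along the lines of the generic sketch below (the matrix and data are random stand-ins, not the paper's actual constraint equations):

```python
import numpy as np

# Solve A s ≈ b in the least-squares sense for the unknown side lengths s.
rng = np.random.default_rng(0)
A = rng.normal(size=(12, 4))            # 12 constraint equations, 4 unknown side lengths
s_true = np.array([2.0, 1.0, 2.0, 1.0])
b = A @ s_true + 0.01 * rng.normal(size=12)
s, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(s)                                # close to s_true
```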
Abstract:
In wireless communication systems, all in-phase and quadrature-phase (I/Q) signal processing receivers face the problem of I/Q imbalance. In this paper, we investigate the effect of I/Q imbalance on the performance of multiple-input multiple-output (MIMO) maximal ratio combining (MRC) systems that perform the combining at the radio frequency (RF) level, thereby requiring only one RF chain. In order to perform the MIMO MRC, we propose a channel estimation algorithm that accounts for the I/Q imbalance. Moreover, a compensation algorithm for the I/Q imbalance in MIMO MRC systems is proposed, which first employs the least-squares (LS) rule to jointly estimate the coefficients of the channel gain matrix, the beamforming and combining weight vectors, and the parameters of the I/Q imbalance, and then makes use of the received signal together with its conjugate to detect the transmitted signal. The performance of the MIMO MRC system under study is evaluated in terms of average symbol error probability (SEP), outage probability and ergodic capacity, which are derived considering transmission over Rayleigh fading channels. Numerical results are provided and show that the proposed compensation algorithm can efficiently mitigate the effect of I/Q imbalance.
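The LS rule used in the first step can be illustrated in isolation with a pilot-based channel estimate on synthetic data (a sketch only; it omits the joint I/Q-imbalance estimation of the proposed algorithm, and all signal parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pilots, n_tx = 16, 2
S = (rng.integers(0, 2, (n_pilots, n_tx)) * 2 - 1).astype(float)     # known BPSK pilots
h = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)  # Rayleigh channel taps
noise = 0.05 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
y = S @ h + noise                                 # received pilot observations
h_ls, *_ = np.linalg.lstsq(S, y, rcond=None)      # LS estimate: argmin ||y - S h||^2
print(np.abs(h - h_ls))                           # small estimation errors
```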
Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV become increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
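For reference, the classical TLS baseline that the study finds inadequate on its own can be written in a few lines via the SVD of the augmented matrix [A | b] (the regularized extension, RTLS, adds a penalty and is not reproduced here; the test data are synthetic):

```python
import numpy as np

def tls(A, b):
    """Classical total least squares via the SVD of the augmented matrix [A | b]."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                 # right singular vector of the smallest singular value
    return -v[:n] / v[n]       # TLS solution (assumes v[n] != 0)

rng = np.random.default_rng(0)
A0 = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
A = A0 + 0.05 * rng.normal(size=A0.shape)       # noisy regressors (errors in variables)
b = A0 @ x_true + 0.05 * rng.normal(size=200)   # noisy observations
print(tls(A, b))                                # close to x_true
```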
Abstract:
We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least-squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update the model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resorting to a separate validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approach.
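The underlying update is the standard coordinate-descent step for an l1-penalized least-squares cost; a minimal version with a fixed regularization parameter (the paper's contribution, the closed-form LOOMSE choice of that parameter, is not reproduced here) is:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ w
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * w[j]                          # remove j-th contribution
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * w[j]                          # add back the updated term
    return w

# toy usage: recover a sparse coefficient vector
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10); w_true[[0, 3]] = [2.0, -1.5]
y = X @ w_true + 0.1 * rng.normal(size=100)
print(np.round(lasso_cd(X, y, lam=5.0), 2))    # nonzeros near indices 0 and 3
```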
Abstract:
The calculation of interval forecasts for highly persistent autoregressive (AR) time series based on the bootstrap is considered. Three methods are considered for countering the small-sample bias of least-squares estimation for processes which have roots close to the unit circle: a bootstrap bias-corrected OLS estimator; the use of the Roy–Fuller estimator in place of OLS; and the use of the Andrews–Chen estimator in place of OLS. All three methods of bias correction yield results superior to the bootstrap without bias correction. Of the three correction methods, the bootstrap prediction intervals based on the Roy–Fuller estimator are generally superior to the other two. The small-sample performance of bootstrap prediction intervals based on the Roy–Fuller estimator is investigated when the order of the AR model is unknown and has to be determined using an information criterion.
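The first of the three corrections can be sketched for an AR(1) model without intercept (illustrative only; the Roy–Fuller and Andrews–Chen estimators are not shown, and the sample size and replication count are assumptions):

```python
import numpy as np

def ar1_ols(x):
    """OLS estimate of phi in x[t] = phi*x[t-1] + e[t]."""
    return np.linalg.lstsq(x[:-1, None], x[1:], rcond=None)[0][0]

def bootstrap_bias_corrected(x, n_boot=500, seed=0):
    """Residual-bootstrap bias correction of the OLS AR(1) coefficient."""
    rng = np.random.default_rng(seed)
    phi_hat = ar1_ols(x)
    resid = x[1:] - phi_hat * x[:-1]
    resid -= resid.mean()
    boots = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=x.size - 1, replace=True)
        xb = np.empty_like(x)
        xb[0] = x[0]
        for t in range(1, x.size):
            xb[t] = phi_hat * xb[t - 1] + e[t - 1]   # resample under the fitted model
        boots[b] = ar1_ols(xb)
    bias = boots.mean() - phi_hat
    return phi_hat - bias

# toy usage: a persistent AR(1) with phi = 0.95
rng = np.random.default_rng(1)
x = np.empty(100); x[0] = 0.0
for t in range(1, 100):
    x[t] = 0.95 * x[t - 1] + rng.normal()
print(ar1_ols(x), bootstrap_bias_corrected(x))   # corrected value is less biased downward
```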
Abstract:
The application of metabolomics in multi-centre studies is increasing. The aim of the present study was to assess the effects of geographical location on the metabolic profiles of individuals with the metabolic syndrome. Blood and urine samples were collected from 219 adults from seven European centres participating in the LIPGENE project (Diet, genomics and the metabolic syndrome: an integrated nutrition, agro-food, social and economic analysis). Nutrient intakes, BMI, waist:hip ratio, blood pressure, and plasma glucose, insulin and blood lipid levels were assessed. Plasma fatty acid levels and urine were assessed using a metabolomic technique. The separation of three European geographical groups (NW, northwest; NE, northeast; SW, southwest) was identified using partial least-squares discriminant analysis models for urine (R²X = 0.33, Q² = 0.39) and plasma fatty acid (R²X = 0.32, Q² = 0.60) data. The NW group was characterised by higher levels of urinary hippurate and N-methylnicotinate. The NE group was characterised by higher levels of urinary creatine and citrate and plasma EPA (20:5 n-3). The SW group was characterised by higher levels of urinary trimethylamine oxide and lower levels of plasma EPA. The indicators of metabolic health appeared to be consistent across the groups. The SW group had higher intakes of total fat and MUFA compared with both the NW and NE groups (P ≤ 0.001). The NE group had higher intakes of fibre and n-3 and n-6 fatty acids compared with both the NW and SW groups (all P < 0.001). It is likely that differences in dietary intakes contributed to the separation of the three groups. Evaluation of geographical factors including diet should be considered in the interpretation of metabolomic data from multi-centre studies.
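Partial least-squares discriminant analysis of this kind can be reproduced in outline with scikit-learn by regressing one-hot class indicators on the predictors (the data below are synthetic; the group sizes and injected effect are assumptions, not the LIPGENE data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLS-DA = PLS regression onto one-hot class indicators; classify by argmax.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                # 60 samples, 20 metabolite features
labels = np.repeat([0, 1, 2], 20)            # three geographical groups, say
X[labels == 1, :3] += 1.5                    # inject some group structure
Y = np.eye(3)[labels]                        # one-hot indicator matrix
pls = PLSRegression(n_components=2).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```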