887 results for Sums of squares


Relevance:

30.00%

Publisher:

Abstract:

Recursive Learning Control (RLC) has the potential to significantly reduce the tracking error in many repetitive trajectory applications. This paper presents an application of RLC to a soil testing load frame where non-adaptive techniques struggle with the highly nonlinear nature of soil. The main purpose of the controller is to apply a sinusoidal force reference trajectory on a soil sample with a high degree of accuracy and repeatability. The controller uses a feedforward control structure, recursive least squares adaptation algorithm and RLC to compensate for periodic errors. Tracking error is reduced and stability is maintained across various soil sample responses.
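The abstract does not give the adaptation equations; as a rough, hedged illustration only, a standard recursive least squares (RLS) update with a forgetting factor, of the kind used in such adaptive feedforward schemes, might look like the following sketch (the model structure, signal names and forgetting factor are assumptions, not taken from the paper).

import numpy as np

# Minimal recursive least squares (RLS) update with a forgetting factor, of the
# kind commonly used for adaptive feedforward control; illustrative only.
def rls_update(theta, P, phi, y, lam=0.99):
    """theta: parameter estimate, P: covariance, phi: regressor, y: measured output."""
    phi = phi.reshape(-1, 1)
    denom = lam + (phi.T @ P @ phi).item()
    K = P @ phi / denom                    # gain vector
    e = y - (phi.T @ theta).item()         # prediction error
    theta = theta + K * e                  # parameter update
    P = (P - K @ phi.T @ P) / lam          # covariance update
    return theta, P

# Toy usage: identify y = 2*u - 0.5*u_prev from noisy data.
rng = np.random.default_rng(0)
theta, P, u_prev = np.zeros((2, 1)), 1e3 * np.eye(2), 0.0
for _ in range(500):
    u = rng.normal()
    y = 2.0 * u - 0.5 * u_prev + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, np.array([u, u_prev]), y)
    u_prev = u
print(theta.ravel())   # converges towards [2.0, -0.5]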

Relevance:

30.00%

Publisher:

Abstract:

We consider the linear equality-constrained least squares problem (LSE) of minimizing ${\|c - Gx\|}_2$, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited for structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray X-MP using some practical structural analysis data.
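For reference, one standard form of the Kuhn–Tucker equations for this LSE problem is the symmetric saddle-point system below, with $\lambda$ the vector of Lagrange multipliers; the particular preconditioner used in the paper is not reproduced here.

$$\begin{pmatrix} G^{\mathsf T}G & E^{\mathsf T} \\ E & 0 \end{pmatrix} \begin{pmatrix} x \\ \lambda \end{pmatrix} = \begin{pmatrix} G^{\mathsf T}c \\ p \end{pmatrix}$$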

Relevance:

30.00%

Publisher:

Abstract:

The concentrations of dissolved noble gases in water are widely used as a climate proxy to determine noble gas temperatures (NGTs); i.e., the temperature of the water when gas exchange last occurred. In this paper we take a step towards applying this principle to fluid inclusions in stalagmites in order to reconstruct the cave temperature prevailing at the time when the inclusion was formed. We present an analytical protocol that allows us to accurately determine noble gas concentrations and isotope ratios in stalagmites, and which includes a precise manometrical determination of the mass of water liberated from fluid inclusions. Most important for NGT determination is to reduce the amount of noble gases liberated from air inclusions, as they mask the temperature-dependent noble gas signal from the water inclusions. We demonstrate that offline pre-crushing in air, followed by extraction of the noble gases and water from the samples by heating, is an appropriate way to separate the gases released from air inclusions and water inclusions. Although a large fraction of recent samples analysed by this technique yields NGTs close to present-day cave temperatures, the interpretation of measured noble gas concentrations in terms of NGTs is not yet feasible using the available least squares fitting models. This is because the noble gas concentrations in stalagmites are not composed solely of the two components that these models can account for: air and air-saturated water (ASW). The observed enrichments in heavy noble gases are interpreted as being due to adsorption during sample preparation in air, whereas the excess in He and Ne is interpreted as an additional noble gas component that is bound in voids in the crystallographic structure of the calcite crystals. As a consequence of our study's findings, NGTs will have to be determined in the future using the concentrations of Ar, Kr and Xe only. This needs to be achieved by further optimizing the sample preparation to minimize atmospheric contamination and to further reduce the amount of noble gases released from air inclusions.

Relevance:

30.00%

Publisher:

Abstract:

Real estate depreciation continues to be a critical issue for investors and the appraisal profession in the UK in the 1990s. Depreciation-sensitive cash flow models have been developed, but there is a real need to develop further empirical methodologies to determine rental depreciation rates for input into these models. Although building quality has been found to be an important explanatory variable in depreciation, it is very difficult to incorporate it into such models or to analyse it retrospectively. It is essential to examine previous depreciation research from real estate and economics in the USA and UK to understand the issues in constructing a valid and pragmatic way of calculating rental depreciation. Distinguishing between 'depreciation' and 'obsolescence' is important, and the pattern of depreciation in any study can be influenced by such factors as the type (longitudinal or cross-sectional) and timing of the study, and the market state. Longitudinal studies can analyse change more directly than cross-sectional studies. Any methodology for calculating rental depreciation rates should be formulated in the context of such issues as 'censored sample bias', 'lemons' and 'filtering', which have been highlighted in key US literature from the field of economic depreciation. Property depreciation studies in the UK have tended to overlook this literature, however. Although data limitations and constraints reduce the ability of empirical property depreciation work in the UK to consider these issues fully, 'averaging' techniques and ordinary least squares (OLS) regression can both provide a consistent way of calculating rental depreciation rates within a 'cohort' framework.

Relevance:

30.00%

Publisher:

Abstract:

This research examines the influence of environmental institutional distance between home and host countries on the standardization of environmental performance among multinational enterprises using ordinary least-squares (OLS) regression techniques and a sample of 128 multinationals from high-polluting industries. The paper examines the environmental institutional distance of countries using the concepts of formal and informal institutional distances. The results show that whereas a high formal environmental distance between home and host countries leads multinational enterprises to achieve a different level of environmental performance according to each country's legal requirements, a high informal environmental distance encourages these firms to unify their environmental performance independently of the countries in which their units are based. The study also discusses the implications for academia, managers, and policy makers.

Relevance:

30.00%

Publisher:

Abstract:

In the first part of this paper (Ulbrich et al. 2003), we gave a description of the August 2002 rainfall events and the resultant floods, in particular of the flood wave of the River Elbe. The extreme precipitation sums observed in the first half of the month were primarily associated with two rainfall episodes. The first episode occurred on 6/7 August 2002. The main rainfall area was situated over Lower Austria, the south-western part of the Czech Republic and south-eastern Germany. A severe flash flood was produced in the Lower Austrian Waldviertel ('forest quarter'). The second episode on 11–13 August 2002 most severely affected the Erz Mountains and western parts of the Czech Republic. During this second episode 312 mm of rain was recorded between 0600 GMT on 12 August and 0600 GMT on 13 August at the Zinnwald weather station in the Erz Mountains, which is a new 24-hour record for Germany. The flash floods resulting from this rainfall episode and the subsequent Elbe flood produced the most expensive weather-related catastrophe in Europe in recent decades. In this part of the paper we discuss the meteorological conditions and physical mechanisms leading to the two main events. Similarities to the conditions that led to the recent summer floods of the River Oder in 1997 and the River Vistula in 2001 will be shown. This will lead us to a consideration of trends in extreme rainfall over Europe which are found in numerical simulations of anthropogenic climate change.

Relevance:

30.00%

Publisher:

Abstract:

We obtain sharp estimates for multidimensional generalisations of Vinogradov’s mean value theorem for arbitrary translation-dilation invariant systems, achieving constraints on the number of variables approaching those conjectured to be the best possible. Several applications of our bounds are discussed.
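For orientation, the classical one-dimensional case generalised here is the translation-dilation invariant Vinogradov system, whose number of integer solutions $J_{s,k}(X)$ with $1 \le x_i \le X$ is the subject of the original mean value theorem:

$$x_1^j + x_2^j + \cdots + x_s^j = x_{s+1}^j + \cdots + x_{2s}^j \qquad (1 \le j \le k).$$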

Relevance:

30.00%

Publisher:

Abstract:

In wireless communication systems, all in-phase and quadrature-phase (I/Q) signal processing receivers face the problem of I/Q imbalance. In this paper, we investigate the effect of I/Q imbalance on the performance of multiple-input multiple-output (MIMO) maximal ratio combining (MRC) systems that perform the combining at the radio frequency (RF) level, thereby requiring only one RF chain. In order to perform the MIMO MRC, we propose a channel estimation algorithm that accounts for the I/Q imbalance. Moreover, a compensation algorithm for the I/Q imbalance in MIMO MRC systems is proposed, which first employs the least-squares (LS) rule to estimate the coefficients of the channel gain matrix, beamforming and combining weight vectors, and parameters of I/Q imbalance jointly, and then makes use of the received signal together with its conjugate to detect the transmitted signal. The performance of the MIMO MRC system under study is evaluated in terms of average symbol error probability (SEP), outage probability and ergodic capacity, which are derived considering transmission over Rayleigh fading channels. Numerical results are provided and show that the proposed compensation algorithm can efficiently mitigate the effect of I/Q imbalance.
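The joint estimator is not given in the abstract; the sketch below illustrates only the underlying idea of fitting a widely linear (signal-plus-conjugate) baseband model of I/Q imbalance to pilot symbols by least squares, with all parameter values and symbol names assumed rather than taken from the paper.

import numpy as np

# Sketch: LS fit of the widely linear model r = k1*s + k2*conj(s) + noise,
# a standard baseband description of receiver I/Q imbalance (values assumed).
rng = np.random.default_rng(1)
n = 64
s = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)   # QPSK pilots
k1, k2 = 0.98 + 0.02j, 0.05 - 0.01j                                       # assumed imbalance
r = k1 * s + k2 * np.conj(s) + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))

A = np.column_stack([s, np.conj(s)])              # regression matrix
k_hat, *_ = np.linalg.lstsq(A, r, rcond=None)     # LS estimate of (k1, k2)

# With (k1, k2) estimated, s is recovered from r together with its conjugate:
#   [r, conj(r)]^T = [[k1, k2], [conj(k2), conj(k1)]] [s, conj(s)]^T
M = np.array([[k_hat[0], k_hat[1]], [np.conj(k_hat[1]), np.conj(k_hat[0])]])
s_hat = np.linalg.solve(M, np.vstack([r, np.conj(r)]))[0]
print(np.allclose(k_hat, [k1, k2], atol=0.05), np.max(np.abs(s_hat - s)) < 0.1)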

Relevance:

30.00%

Publisher:

Abstract:

It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an error-in-the-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
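The regularized extension itself is not detailed in the abstract; for context, a sketch of the classical total least-squares solution it builds on, obtained from the SVD of the augmented matrix [A | b] and compared with ordinary LS on synthetic errors-in-variables data (not CBF/CBV measurements), is given below.

import numpy as np

# Classical total least squares (TLS) via the SVD of the augmented matrix [A | b];
# the regularized variant (RTLS) additionally constrains the solution norm (not shown).
def tls(A, b):
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[:n] / v[n]

# Illustrative regression with noise in both the regressors and the output.
rng = np.random.default_rng(2)
x_true = np.array([0.7, -0.2])
A = rng.normal(size=(200, 2))
b = A @ x_true + 0.05 * rng.normal(size=200)
A_noisy = A + 0.05 * rng.normal(size=A.shape)
print("OLS:", np.linalg.lstsq(A_noisy, b, rcond=None)[0])
print("TLS:", tls(A_noisy, b))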

Relevance:

30.00%

Publisher:

Abstract:

In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters compared to the ones generated by means of the conventional PNN algorithm. Three benchmark examples are considered, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
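The OLS step named above is, in essence, a greedy forward selection of the candidate partial-description terms that most reduce the residual error variance; the sketch below shows that idea without the orthogonalisation bookkeeping of the full OLS algorithm, on made-up polynomial terms.

import numpy as np

# Greedy forward selection of regressor terms by residual error reduction, in the
# spirit of the OLS selection step described above; data and terms are placeholders.
def forward_select(Phi, y, n_terms):
    selected, remaining = [], list(range(Phi.shape[1]))
    for _ in range(n_terms):
        best_j, best_err = None, np.inf
        for j in remaining:
            cols = selected + [j]
            theta, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
            err = np.sum((y - Phi[:, cols] @ theta) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(3)
x = rng.normal(size=(300, 2))
# Candidate polynomial terms: 1, x1, x2, x1^2, x1*x2, x2^2
Phi = np.column_stack([np.ones(300), x[:, 0], x[:, 1],
                       x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2])
y = 1.5 * x[:, 0] - 2.0 * x[:, 0] * x[:, 1] + 0.05 * rng.normal(size=300)
print(forward_select(Phi, y, 2))   # typically selects the x1 and x1*x2 columns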

Relevance:

30.00%

Publisher:

Abstract:

The application of metabolomics in multi-centre studies is increasing. The aim of the present study was to assess the effects of geographical location on the metabolic profiles of individuals with the metabolic syndrome. Blood and urine samples were collected from 219 adults from seven European centres participating in the LIPGENE project (Diet, genomics and the metabolic syndrome: an integrated nutrition, agro-food, social and economic analysis). Nutrient intakes, BMI, waist:hip ratio, blood pressure, and plasma glucose, insulin and blood lipid levels were assessed. Plasma fatty acid levels and urine were assessed using a metabolomic technique. The separation of three European geographical groups (NW, northwest; NE, northeast; SW, southwest) was identified using partial least-squares discriminant analysis models for urine (R²X: 0.33, Q²: 0.39) and plasma fatty acid (R²X: 0.32, Q²: 0.60) data. The NW group was characterised by higher levels of urinary hippurate and N-methylnicotinate. The NE group was characterised by higher levels of urinary creatine and citrate and plasma EPA (20:5 n-3). The SW group was characterised by higher levels of urinary trimethylamine oxide and lower levels of plasma EPA. The indicators of metabolic health appeared to be consistent across the groups. The SW group had higher intakes of total fat and MUFA compared with both the NW and NE groups (P ≤ 0.001). The NE group had higher intakes of fibre and n-3 and n-6 fatty acids compared with both the NW and SW groups (all P < 0.001). It is likely that differences in dietary intakes contributed to the separation of the three groups. Evaluation of geographical factors including diet should be considered in the interpretation of metabolomic data from multi-centre studies.
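As a generic illustration of partial least-squares discriminant analysis (PLS-DA) only, not the authors' pipeline or data, PLS regression onto dummy-coded group labels can be sketched as follows (scikit-learn, synthetic stand-in features).

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

# Generic PLS-DA sketch: PLS regression onto one-hot group labels,
# using synthetic stand-in data (not the LIPGENE measurements).
rng = np.random.default_rng(4)
X = rng.normal(size=(120, 20))            # stand-in metabolite / fatty acid features
groups = np.repeat([0, 1, 2], 40)         # stand-ins for the NW / NE / SW centres
X[groups == 1, 0] += 1.5                  # inject an artificial group difference
Y = np.eye(3)[groups]                     # dummy (one-hot) class matrix

Xs = StandardScaler().fit_transform(X)
pls = PLSRegression(n_components=2).fit(Xs, Y)
scores = pls.transform(Xs)                # latent scores underlying the group separation
pred = pls.predict(Xs).argmax(axis=1)
print("training accuracy:", (pred == groups).mean())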

Relevance:

30.00%

Publisher:

Abstract:

The objective was to measure effects of 3-nitrooxypropanol (3NP) on methane production of lactating dairy cows and any associated changes in digestion and energy and nitrogen metabolism. Six Holstein-Friesian dairy cows in mid-lactation were fed twice daily a total mixed ration with maize silage as the primary forage source. Cows received 1 of 3 treatments using an experimental design based on two 3 × 3 Latin squares with 5-wk periods. Treatments were a control placebo or 500 or 2,500 mg/d of 3NP delivered directly into the rumen, via the rumen fistula, in equal doses before each feeding. Measurements of methane production and energy and nitrogen balance were obtained during wk 5 of each period using respiration calorimeters and digestion trials. Measurements of rumen pH (48 h) and postprandial volatile fatty acid and ammonia concentrations were made at the end of wk 4. Daily methane production was reduced by 3NP, but the effects were not dose dependent (reductions of 6.6 and 9.8% for 500 and 2,500 mg/d, respectively). Dosing 3NP had a transitory inhibitory effect on methane production, which may have been due to the product leaving the rumen in liquid outflow or through absorption or metabolism. Changes in rumen concentrations of volatile fatty acids indicated that the pattern of rumen fermentation was affected by both doses of the product, with a decrease in acetate:propionate ratio observed, but that acetate production was inhibited by the higher dose. Dry matter, organic matter, acid detergent fiber, N, and energy digestibility were reduced at the higher dose of the product. The decrease in digestible energy supply was not completely countered by the decrease in methane excretion such that metabolizable energy supply, metabolizable energy concentration of the diet, and net energy balance (milk plus tissue energy) were reduced by the highest dose of 3NP. Similarly, the decrease in nitrogen digestibility at the higher dose of the product was associated with a decrease in body nitrogen balance that was not observed for the lower dose. Milk yield and milk fat concentration and fatty acid composition were not affected but milk protein concentration was greater for the higher dose of 3NP. Twice-daily rumen dosing of 3NP reduced methane production by lactating dairy cows, but the dose of 2,500 mg/d reduced rumen acetate concentration, diet digestibility, and energy supply. Further research is warranted to determine the optimal dose and delivery method of the product. Key words: 3-nitrooxypropanol, methane, digestion, rumen, dairy cow

Relevance:

30.00%

Publisher:

Abstract:

Radar refractivity retrievals have the potential to accurately capture near-surface humidity fields from the phase change of ground clutter returns. In practice, phase changes are very noisy and the required smoothing will diminish large radial phase change gradients, leading to severe underestimates of large refractivity changes (ΔN). To mitigate this, the mean refractivity change over the field (ΔN_field) must be subtracted prior to smoothing. However, both observations and simulations indicate that highly correlated returns (e.g., when single targets straddle neighboring gates) result in underestimates of ΔN_field when pulse-pair processing is used. This may contribute to reported differences of up to 30 N units between surface observations and retrievals. This effect can be avoided if ΔN_field is estimated using a linear least squares fit to azimuthally averaged phase changes. Nevertheless, subsequent smoothing of the phase changes will still tend to diminish the all-important spatial perturbations in retrieved refractivity relative to ΔN_field; an iterative estimation approach may be required. The uncertainty in the target location within the range gate leads to additional phase noise proportional to ΔN, pulse length, and radar frequency. The use of short pulse lengths is recommended, not only to reduce this noise but to increase both the maximum detectable refractivity change and the number of suitable targets. Retrievals of refractivity fields must allow for large ΔN relative to an earlier reference field. This should be achievable for short pulses at S band, but phase noise due to target motion may prevent this at C band, while at X band even the retrieval of ΔN over shorter periods may at times be impossible.
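The field-mean estimation step lends itself to a brief sketch: with the standard two-way phase-range relation Δφ(r) = (4πf/c) · 1e-6 · ΔN_field · r, a linear least squares fit of azimuthally averaged phase change against range recovers ΔN_field from its slope. The numbers below (S-band frequency, noise level) are assumptions, and phase wrapping and target quality control are ignored.

import numpy as np

# Estimate the field-mean refractivity change from the slope of azimuthally
# averaged phase change versus range (synthetic values; wrapping ignored).
c = 3.0e8                                    # speed of light, m/s
f = 3.0e9                                    # radar frequency, Hz (assumed S band)
ranges = np.arange(1e3, 30e3, 250.0)         # gate ranges, m

dN_true = 20.0                               # true mean refractivity change, N units
slope_true = 4 * np.pi * f / c * dN_true * 1e-6      # rad per metre of range
rng = np.random.default_rng(5)
dphi = slope_true * ranges + 0.5 * rng.normal(size=ranges.size)

slope = np.polyfit(ranges, dphi, 1)[0]       # linear least squares fit
dN_field = slope * c / (4 * np.pi * f) * 1e6
print(round(dN_field, 2))                    # close to 20 N units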

Relevance:

30.00%

Publisher:

Abstract:

We propose a new class of neurofuzzy construction algorithms with the aim of maximizing generalization capability specifically for imbalanced data classification problems based on leave-one-out (LOO) cross validation. The algorithms operate in two stages: first, an initial rule base is constructed based on estimating the Gaussian mixture model with analysis of variance decomposition from input data; the second stage carries out the joint weighted least squares parameter estimation and rule selection using the orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated with OFSS, and advocate either maximizing the leave-one-out area under the curve of the receiver operating characteristic, or maximizing the leave-one-out F-measure if the data sets exhibit imbalanced class distribution. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
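The LOO criteria are evaluated analytically inside OFSS in the paper; as a brute-force illustration only, the two advocated criteria (LOO area under the ROC curve and LOO F-measure) can be computed for a stand-in classifier as follows (scikit-learn, synthetic imbalanced data).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score, f1_score

# Brute-force leave-one-out estimates of the two selection criteria named above,
# using a placeholder classifier rather than the neurofuzzy model of the paper.
X, y = make_classification(n_samples=120, n_features=8, weights=[0.9, 0.1], random_state=0)
clf = LogisticRegression(max_iter=1000)

proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOO AUC      :", roc_auc_score(y, proba))
print("LOO F-measure:", f1_score(y, (proba > 0.5).astype(int)))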

Relevance:

30.00%

Publisher:

Abstract:

We have studied the degradation of sebaceous fingerprints on brass surfaces using silver electroless deposition (SED) as a visualization technique. We have stored fingerprints on brass squares either (i) in a locked dark cupboard or (ii) in glass-filtered natural daylight for periods of 3 h, 24 h, 1 week, 3 weeks, and 6 weeks. We find that fingerprints on brass surfaces degrade much more rapidly when kept in the light than they do under dark conditions, with a much higher proportion of high-quality prints found after 3 or 6 weeks of aging when stored in the dark. This process is more marked than for similar fingerprints on black PVC surfaces. Identifiable prints can be achieved on brass surfaces using both SED and cyanoacrylate fuming (CFM). SED is quick and straightforward to perform. CFM is more time-consuming but is versatile and can be applied to a wider range of metal surfaces than SED, for example brass surfaces which have been coated with a lacquer.