Abstract:
Although difference-stationary (DS) and trend-stationary (TS) processes have been subject to considerable analysis, there are no direct comparisons with each serving as the data-generation process (DGP). We examine the consequences of an incorrect choice between these models for forecasting, for both known and estimated parameters. Three sets of Monte Carlo simulations illustrate the analysis: they evaluate the biases in conventional standard errors when each model is mis-specified, compute the relative mean-square forecast errors of the two models under both DGPs, and investigate autocorrelated errors, which allow each model to better approximate the converse DGP. The outcomes are surprisingly different from established results.
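As a rough illustration of the kind of comparison described (not the authors' actual design), the sketch below simulates a DS DGP (a random walk with drift) and compares one-step-ahead squared forecast errors of a DS model against a fitted linear-trend (TS) model; all parameter values are arbitrary assumptions:

```python
import numpy as np

def simulate_ds(n, drift=0.1, sigma=1.0, rng=None):
    """Difference-stationary DGP: a random walk with drift (assumed values)."""
    if rng is None:
        rng = np.random.default_rng(0)
    return np.cumsum(drift + sigma * rng.standard_normal(n))

def one_step_forecasts(y):
    """One-step-ahead forecasts from a DS model (last value plus estimated
    drift) and a TS model (OLS linear trend extrapolated one step)."""
    n = len(y)
    f_ds = y[-1] + np.mean(np.diff(y))
    slope, intercept = np.polyfit(np.arange(n), y, 1)
    f_ts = intercept + slope * n
    return f_ds, f_ts

rng = np.random.default_rng(42)
err_ds, err_ts = [], []
for _ in range(2000):
    y = simulate_ds(101, rng=rng)
    f_ds, f_ts = one_step_forecasts(y[:-1])
    err_ds.append((y[-1] - f_ds) ** 2)
    err_ts.append((y[-1] - f_ts) ** 2)

# Under a DS DGP, the correctly specified DS model should have the
# smaller mean-square forecast error on average.
print(np.mean(err_ds) < np.mean(err_ts))
```

Swapping the roles (simulating a TS DGP and mis-specifying a DS model) gives the converse experiment.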
Abstract:
High-resolution surface wind fields covering the global ocean, estimated from remotely sensed wind data and ECMWF wind analyses, have been available since 2005 with a spatial resolution of 0.25 degrees in longitude and latitude and a temporal resolution of 6 h. Their quality is investigated through comparisons with surface wind vectors from 190 buoys moored in various oceanic basins, from research vessels, and from QuikSCAT scatterometer data taken during 2005-2006. The NCEP/NCAR and NCDC blended wind products are also considered. The comparisons performed during January-December 2005 show that speeds and directions compare well to in-situ observations, including those from moored buoys and ships, as well as to the remotely sensed data. The root-mean-squared differences of the wind speed and direction for the new blended wind data are lower than 2 m/s and 30 degrees, respectively. These values are similar to those estimated in comparisons of hourly buoy measurements and QuikSCAT near-real-time retrievals. At global scale, the new products compare well with the wind speed and wind vector components observed by QuikSCAT, with no significant dependency on the QuikSCAT wind speed or on the oceanic region considered.
Evaluation of high-resolution surface wind products at global and regional scales
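The root-mean-squared differences quoted above are straightforward to compute; one subtlety is that wind direction is circular, so differences should be wrapped before squaring. A minimal sketch (the wrapping convention is an assumption, not taken from the paper):

```python
import numpy as np

def rms_diff_speed(u, v):
    """RMS difference of two wind-speed series (e.g. buoy vs. blended product)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.sqrt(np.mean((u - v) ** 2)))

def rms_diff_direction(a, b):
    """RMS of angular differences in degrees, wrapped to [-180, 180] so that
    350 deg vs. 10 deg counts as a 20 deg error, not 340 deg."""
    d = (np.asarray(a, float) - np.asarray(b, float) + 180.0) % 360.0 - 180.0
    return float(np.sqrt(np.mean(d ** 2)))

print(rms_diff_speed([3.0], [5.0]))                      # 2.0
print(rms_diff_direction([350.0, 10.0], [10.0, 350.0]))  # 20.0, not 340.0
```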
Abstract:
An efficient two-level model identification method aiming at maximising a model's generalisation capability is proposed for a large class of linear-in-the-parameters models built from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters of the elastic net are optimised at the upper level using a particle swarm optimisation (PSO) algorithm that minimises the leave-one-out (LOO) mean square error (LOOMSE). There are two original contributions. Firstly, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates automatic model structure selection without the need for a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE of the resultant ENOFR models can be computed analytically without actually splitting the data set, and the associated computational cost is small thanks to the ENOFR procedure. Consequently, a fully automated procedure is achieved without resorting to a separate validation data set for iterative model evaluation. Illustrative examples demonstrate the effectiveness of the new approach.
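The analytic LOO computation mentioned above rests on the standard hat-matrix identity e_i^loo = e_i / (1 - h_ii) for linear-in-the-parameters models fitted by penalised least squares. The sketch below illustrates it for plain ridge regression rather than the paper's ENOFR decomposition, and verifies it against an explicit leave-one-out split; all data are synthetic:

```python
import numpy as np

def loo_mse_ridge(X, y, lam):
    """Analytic leave-one-out MSE for ridge regression via the hat-matrix
    identity e_i^loo = e_i / (1 - h_ii) -- no explicit data splitting."""
    n, p = X.shape
    A = X.T @ X + lam * np.eye(p)
    H = X @ np.linalg.solve(A, X.T)          # hat (smoother) matrix
    resid = y - H @ y
    return float(np.mean((resid / (1.0 - np.diag(H))) ** 2))

def loo_mse_brute(X, y, lam):
    """Reference implementation that actually refits n times."""
    errs = []
    for i in range(len(y)):
        m = np.ones(len(y), bool); m[i] = False
        A = X[m].T @ X[m] + lam * np.eye(X.shape[1])
        w = np.linalg.solve(A, X[m].T @ y[m])
        errs.append((y[i] - X[i] @ w) ** 2)
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(50)
print(abs(loo_mse_ridge(X, y, 0.1) - loo_mse_brute(X, y, 0.1)) < 1e-9)  # True
```

The identity is exact for ridge (via the Sherman-Morrison formula), which is what makes LOOMSE-driven model selection cheap.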
Abstract:
We study the orientational ordering on the surface of a sphere using Monte Carlo and Brownian dynamics simulations of rods interacting with an anisotropic potential. We restrict the orientations to the local tangent plane of the spherical surface and fix the position of each rod to a discrete point on the spherical surface. On the surface of a sphere, orientational ordering cannot be perfectly nematic due to the inevitable presence of defects. We find that the ground state of four +1/2 point defects is stable across a broad range of temperatures. We investigate the transition from the disordered to the ordered phase by decreasing the temperature and find it to be very smooth. We use fluctuations of the local directors to estimate the Frank elastic constant on the surface of a sphere and compare it to the planar case. We observe subdiffusive behavior in the mean square displacement of the defect cores and estimate their diffusion constants.
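Estimating a diffusion constant from a mean square displacement curve, as done for the defect cores, typically proceeds by fitting a power law MSD(t) ∝ t^α, where α < 1 indicates the subdiffusion reported above. A generic sketch on a planar random walk, not the paper's spherical defect trajectories:

```python
import numpy as np

def msd_curve(traj, max_lag):
    """Mean square displacement vs. lag time for one trajectory
    (array of shape n_steps x n_dim), using overlapping windows."""
    traj = np.asarray(traj, float)
    lags = np.arange(1, max_lag)
    vals = np.array([np.mean(np.sum((traj[k:] - traj[:-k]) ** 2, axis=1))
                     for k in lags])
    return lags, vals

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal((10000, 2)), axis=0)  # 2D random walk
lags, vals = msd_curve(walk, 1000)
alpha = np.polyfit(np.log(lags), np.log(vals), 1)[0]  # slope of log-log fit
print(0.8 < alpha < 1.2)  # normal diffusion: alpha close to 1
```

For normal diffusion in d dimensions, MSD(t) ≈ 2 d D t, so D follows from the fitted prefactor once α ≈ 1 is confirmed.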
Abstract:
In this paper we investigate the equilibrium properties of magnetic dipolar (ferro-) fluids and discuss finite-size effects originating from the use of different boundary conditions in computer simulations. Both periodic boundary conditions and a finite spherical box are studied. We demonstrate that periodic boundary conditions, with the Ewald sum used to account for the long-range dipolar interactions, lead to a much faster convergence (in terms of the number of simulated dipolar particles) of the magnetization curve and the initial susceptibility to their thermodynamic limits. An additional drawback of simulations in a finite spherical box geometry is a considerable sensitivity to the container size. We further investigate the influence of the surface term in the Ewald sum, i.e., the term due to the surrounding continuum with magnetic permeability mu(BC), on the convergence properties of our observables and on the final results. The two ways of evaluating the initial susceptibility, (1) from the magnetization response of the system to an applied field and (2) from the zero-field fluctuation of the mean-square dipole moment of the system, are compared in terms of speed and accuracy.
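Route (2) above, the zero-field fluctuation formula, can be sanity-checked on non-interacting dipoles, for which the Langevin result chi = 1/3 per particle (in the reduced units assumed here: unit dipole moment, kT = 1) must be recovered:

```python
import numpy as np

# For N non-interacting unit dipoles, <M> = 0 and <|M|^2> = N, so the
# zero-field fluctuation estimator chi = (<M^2> - <M>^2) / (3 N kT)
# must reproduce the Langevin susceptibility of 1/3 per particle.
rng = np.random.default_rng(0)
N, samples, kT = 100, 5000, 1.0
M2 = []
for _ in range(samples):
    u = rng.standard_normal((N, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # random unit dipoles
    M = u.sum(axis=0)                              # total dipole moment
    M2.append(M @ M)
chi_fluct = np.mean(M2) / (3 * N * kT)
print(abs(chi_fluct - 1.0 / 3.0) < 0.02)  # True: close to the Langevin value
```

For interacting dipoles the same estimator applies, but its value (and its finite-size behaviour) then depends on the boundary conditions discussed above.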
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each RBF kernel has its own width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each pair associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection, as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters and a further procedure to optimize the kernel width, the new OFR algorithm optimizes both the kernel widths and the regularization parameters within a single OFR procedure, so the required computational complexity is dramatically reduced. Nonlinear system identification examples demonstrate the effectiveness of this new approach in comparison to the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
Abstract:
In this paper, Bond Graphs are employed to develop a novel mathematical model of conventional switched-mode DC-DC converters valid for both continuous and discontinuous conduction modes. A unique-causality bond graph representation of the hybrid model is suggested, with the switch represented by a Modulated Transformer with a binary input and the diode by a resistor with fixed conductance causality. The operation of the diode is controlled using an if-then function within the model. The extracted hybrid model is implemented on Boost and Buck converters whose operation changes from CCM to DCM and back to CCM. The vector fields of the models show validity over a wide operating area, and comparison with PSPICE simulations of the converters reveals the high accuracy of the proposed model, with the Normalised Root Mean Square Error and the Maximum Absolute Error remaining adequately low. The model is also experimentally tested on a Buck topology.
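The two accuracy metrics cited can be sketched as follows; the normalisation used for the NRMSE (here, the range of the reference signal) is a common convention but an assumption, since the abstract does not specify it:

```python
import numpy as np

def nrmse(ref, sim):
    """Root-mean-square error between a reference trace (e.g. a PSPICE
    simulation) and a model output, normalised by the reference range."""
    ref, sim = np.asarray(ref, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((ref - sim) ** 2))
    return float(rmse / (ref.max() - ref.min()))

def max_abs_error(ref, sim):
    """Largest pointwise deviation between the two traces."""
    return float(np.max(np.abs(np.asarray(ref, float) - np.asarray(sim, float))))

ref = [0.0, 1.0, 2.0]
sim = [0.0, 1.0, 2.5]
print(max_abs_error(ref, sim))  # 0.5
print(nrmse(ref, sim))          # ~0.1443
```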
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. Such a timing mismatch may cause rank deficiency of conventional space-time codes and, thus, performance degradation. One efficient way to overcome this issue is delay-tolerant space-time codes (DT-STCs). Existing DT-STCs are designed assuming that the transmitter has no knowledge of the channels. In this paper, we show how the performance of DT-STCs can be improved by utilizing some feedback information. A general framework for designing DT-STCs with limited feedback is first proposed, allowing for flexible system parameters such as the number of transmit/receive antennas, the modulated symbols, and the length of codewords. Then, a new design method is proposed that combines Lloyd's algorithm and the stochastic gradient-descent algorithm to obtain an optimal codebook of STCs, particularly for systems with a linear minimum-mean-square-error receiver. Finally, simulation results confirm the performance of the newly designed DT-STCs with limited feedback.
Abstract:
To mitigate the inter-carrier interference (ICI) of doubly-selective (DS) fading channels, we consider in this letter a hybrid carrier modulation (HCM) system employing discrete partial fast Fourier transform (DPFFT) demodulation and banded minimum mean square error (MMSE) equalization. We first provide the discrete form of partial FFT demodulation, then apply the banded MMSE equalization to suppress the residual interference at the receiver. The proposed algorithm is demonstrated, via numerical simulations, to be superior to the single carrier modulation (SCM) system and the circularly prefixed orthogonal frequency division multiplexing (OFDM) system over a typical DS channel. Moreover, it represents a good trade-off between computational complexity and performance.
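A banded MMSE equalizer can be sketched as follows: the channel matrix is truncated to its 2Q+1 central diagonals (the dominant ICI terms) before forming the linear MMSE solution. The toy channel and parameter values below are assumptions for illustration, not the letter's system model:

```python
import numpy as np

def banded_mmse_equalize(H, y, noise_var, Q):
    """Linear MMSE equalisation keeping only 2Q+1 diagonals of the channel
    matrix H -- a common low-complexity approximation for ICI suppression."""
    N = H.shape[0]
    offsets = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    Hb = np.where(offsets <= Q, H, 0.0)          # banded approximation of H
    W = Hb.conj().T @ np.linalg.inv(Hb @ Hb.conj().T + noise_var * np.eye(N))
    return W @ y

rng = np.random.default_rng(0)
N, Q = 16, 2
offsets = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
H = np.eye(N) + 0.1 * rng.standard_normal((N, N)) * (offsets <= Q)  # banded ICI
x = rng.choice([-1.0, 1.0], N)                   # BPSK symbols
y = H @ x + 0.01 * rng.standard_normal(N)        # received signal
mse_eq = np.mean((banded_mmse_equalize(H, y, 1e-4, Q) - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
print(mse_eq < mse_raw)  # equalisation reduces the symbol-level MSE
```

A production implementation would exploit the band structure (e.g. banded LDL factorisation) instead of the dense inverse used here for clarity.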
Abstract:
We apply the Coexistence Approach (CoA) to reconstruct mean annual precipitation (MAP), mean annual temperature (MAT), mean temperature of the warmest month (MTWA) and mean temperature of the coldest month (MTCO) at 44 pollen sites on the Qinghai–Tibetan Plateau. The modern climate ranges of the taxa are obtained (1) from county-level presence/absence data and (2) from data on the optimum and range of each taxon from Lu et al. (2011). The CoA based on the optimum and range data yields better predictions of observed climate parameters at the pollen sites than that based on the county-level data. The presence of arboreal pollen, most of which is derived from outside the region, distorts the reconstructions. More reliable reconstructions are obtained using only the non-arboreal component of the pollen assemblages. The root mean-squared error (RMSE) of the MAP reconstructions is smaller than the RMSE of MAT, MTWA and MTCO, suggesting that precipitation gradients are the most important control on vegetation distribution on the Qinghai–Tibetan Plateau. Our results show that CoA can be used to reconstruct past climates in this region, although in areas characterized by open vegetation the most reliable estimates are obtained by excluding possible arboreal contaminants.
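In one climate dimension, the Coexistence Approach reduces to intersecting the tolerance intervals of all taxa present in a sample; the reconstructed value lies in the mutual overlap. A minimal sketch with made-up tolerance ranges:

```python
def coexistence_interval(taxa_ranges):
    """Coexistence Approach in one climate dimension: intersect the
    (lo, hi) tolerance intervals of all taxa present in a pollen sample.
    Returns the coexistence interval, or None if the taxa cannot coexist."""
    lo = max(r[0] for r in taxa_ranges)
    hi = min(r[1] for r in taxa_ranges)
    return (lo, hi) if lo <= hi else None

# Hypothetical MAP tolerances (mm/yr) for three taxa found together
ranges = [(100, 600), (250, 900), (200, 700)]
print(coexistence_interval(ranges))  # (250, 600)
```

Excluding a contaminant taxon simply means dropping its interval before intersecting, which is why removing arboreal pollen can widen (and correct) the reconstruction.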
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data available in an online fashion, the performance of all candidate sub-models is monitored over the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution so that maximal computational efficiency is achieved. In addition, at each time step the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
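The sum-to-one constrained combination admits a closed-form solution via a single Lagrange multiplier. The sketch below solves the batch version over a data window (the recursive updating and sub-model selection of the paper's algorithm are omitted); the small ridge term is an assumption added for numerical stability:

```python
import numpy as np

def combine_sum_to_one(P, y, ridge=1e-8):
    """Least-squares combination weights w minimising ||y - P w||^2
    subject to 1'w = 1, in closed form via a Lagrange multiplier.
    P: n x M matrix of the M sub-model predictions over a data window."""
    A = P.T @ P + ridge * np.eye(P.shape[1])
    Ainv_Py = np.linalg.solve(A, P.T @ y)
    ones = np.ones(P.shape[1])
    Ainv_1 = np.linalg.solve(A, ones)
    mu = (1.0 - ones @ Ainv_Py) / (ones @ Ainv_1)  # enforces 1'w = 1
    return Ainv_Py + mu * Ainv_1

# Two sub-models whose average reproduces the target exactly
P = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]])
y = np.array([1.5, 1.5, 3.5])
w = combine_sum_to_one(P, y)
print(w.sum())  # 1.0 up to numerical precision; w is close to [0.5, 0.5]
```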
Abstract:
In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the sub-models are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor and to apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint on the combination parameters is also applied, with the aim of achieving sparsity so that only a subset of models may be selected into the final model; a weighted l2-norm is then applied as an approximation to the l1-norm term. As such, at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is made computationally efficient by exploiting matrix theory. The effectiveness of the approach is demonstrated using both simulated and real time series examples.
Abstract:
The scalar form factor describes modifications induced by the pion over the quark condensate. Assuming that representations produced by chiral perturbation theory can be pushed to large values of negative t, a region in configuration space is reached (r < R ≈ 0.5 fm) where the form factor changes sign, indicating that the condensate has turned into empty space. A simple model for the pion incorporates this feature into density functions. When supplemented by scalar-meson excitations, it yields predictions close to empirical values for the mean square radius (⟨r²⟩^S_π = 0.59 fm²) and for one of the low-energy constants (l̄₄ = 4.3), with no adjusted parameters.
Abstract:
Objectives: The aim of this study was to evaluate the effects of tamoxifen on the weight and the thickness of the urethral epithelium of castrated female rats. Methods: Forty castrated adult female Wistar-Hannover rats were randomly divided into two groups: Group I (n = 20), in which the animals received only the vehicle (propylene glycol), and Group II (n = 20), in which the rats received tamoxifen 250 μg/day by gavage. After 30 days of treatment, all animals were sacrificed and the urethra was immediately removed for weighing. Next, the urethra was divided into the proximal and distal segments, which were fixed in 10% formaldehyde and submitted to routine histological techniques for morphometric study. The data were analyzed using the weighted minimum mean-square error method and Student's t-test for two independent samples (p < 0.05). Results: There was a significant increase in the mean weight of the urethra in the rats of Group II compared to the control group, 32.0 ± 2.0 mg and 22.0 ± 1.6 mg, respectively (p < 0.001). The mean thickness of the distal urethral epithelium of the animals treated with tamoxifen was significantly greater than that of the control group, 42.8 ± 2.0 μm and 36.6 ± 1.5 μm, respectively (p < 0.001). There was no statistically significant difference between the two groups with respect to the epithelial thickness of the proximal urethra (p = 0.514). Conclusion: Treating castrated adult rats with 250 μg/day of tamoxifen for 30 days may increase the weight of the urethra and the thickness of the distal urethral epithelium. (c) 2008 Elsevier Ireland Ltd. All rights reserved.
Abstract:
The rapid development of data transfer through the internet has made it easier to send data accurately and quickly to its destination. There are many transmission media for delivering data, such as e-mail; at the same time, valuable information may be modified and misused through hacking. To transfer data securely, without modification, approaches such as cryptography and steganography are used. This paper deals with image steganography and the related security issues, giving a general overview of cryptography, steganography and digital watermarking. The problem of copyright violation of multimedia data has increased with the enormous growth of computer networks, which provide fast and error-free transmission of unauthorized, possibly manipulated, copies of multimedia information. To be effective for copyright protection, a digital watermark must be robust: difficult to remove from the object in which it is embedded despite a variety of possible attacks. To send the message safely and securely, we use invisible watermarking, embedding the message with the LSB (Least Significant Bit) steganographic technique. The standard LSB technique embeds the message in every pixel; our contribution is to embed the message only along the image edges. Even if an attacker knows that the system uses the LSB technique, the correct message cannot be recovered. To make the system robust and secure, we add a cryptographic algorithm based on the Vigenère square, so that the message is transmitted as cipher text, an added advantage of the proposed system. The standard Vigenère square works with either lower-case or upper-case letters only; the proposed algorithm extends the Vigenère square with numbers, so the crypto key can combine characters and numbers.
By combining these modifications to the existing algorithm with both cryptography and steganography, we develop a secure and strong watermarking method. The performance of this watermarking scheme has been analyzed by evaluating the robustness of the algorithm with the PSNR (Peak Signal to Noise Ratio) and MSE (Mean Square Error) against image quality for large amounts of embedded data. The proposed scheme achieves a high PSNR of 89 dB with a small MSE of 0.0017. The proposed watermarking system therefore appears secure and robust for hiding information in digital systems, since it combines the properties of both steganography and cryptography.
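The extension of the Vigenère square to an alphanumeric alphabet can be sketched as below; the exact table and key-handling rules of the proposed system are not given in the abstract, so this is an assumed reconstruction over a 36-symbol alphabet:

```python
# Extended Vigenere cipher over A-Z plus 0-9 (36 symbols), so keys and
# plaintext may mix letters and digits; shifts wrap modulo 36.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def vigenere(text, key, decrypt=False):
    """Encrypt (or decrypt) text with a repeating alphanumeric key.
    All characters of text and key must belong to ALPHABET."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        p = ALPHABET.index(ch)
        k = ALPHABET.index(key[i % len(key)])
        out.append(ALPHABET[(p + sign * k) % len(ALPHABET)])
    return "".join(out)

cipher = vigenere("MEET2024", "KEY9")
print(vigenere(cipher, "KEY9", decrypt=True))  # MEET2024
```

The cipher text produced this way would then be hidden in the cover image's edge pixels by the LSB step described above.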