109 results for Minimum Mean Square Error of Intensity Distribution


Relevance:

100.00%

Publisher:

Abstract:

A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion, combining local component analysis with the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is first applied to find this covariance matrix using the expectation-maximization algorithm. Since the constraint on the mixing coefficients of a finite mixture model places them on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find the set of sparse mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
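The starting point of the method, a Parzen window estimator whose Gaussian kernels share one covariance matrix, can be sketched as follows. This is a minimal illustration only; the local component analysis and the Riemannian sparsification of the mixing coefficients are not shown.

```python
import numpy as np

def parzen_window_density(x, samples, cov):
    """Parzen window estimate at the points x: equal-weight Gaussian
    kernels centred on the training samples, all sharing a single
    covariance matrix `cov`."""
    d = samples.shape[1]
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    diffs = x[:, None, :] - samples[None, :, :]            # (m, n, d)
    mahal = np.einsum('mnd,de,mne->mn', diffs, inv, diffs)  # squared Mahalanobis
    return norm * np.exp(-0.5 * mahal).mean(axis=1)
```

The sparse estimator described above would replace the equal weights `1/n` with a sparse set of mixing coefficients found on the multinomial manifold.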


We have shown previously that participants "at risk" of depression show decreased neural processing of reward, suggesting this might be a neural biomarker for depression. However, how the neural signals related to subjective experiences of reward (wanting, liking, intensity) might differ as trait markers for depression is as yet unknown. Using SPM8 parametric modulation analysis, the neural signal related to subjective reports of wanting, liking and intensity was compared between 25 young people with a biological parent with depression (FH) and 25 age/gender-matched controls. In a second study, the same comparison was made between 13 unmedicated recovered depressed (RD) patients and 14 healthy age/gender-matched controls. The analysis revealed differences in the neural signal for wanting, liking and intensity ratings in the ventral striatum, dmPFC and caudate, respectively, in the RD group compared to controls. Despite no differences in the FH group's neural signals for wanting and liking, there was a difference in the neural signal for intensity ratings in the dACC and anterior insula compared to controls. These results suggest that the neural substrates tracking the intensity, but not the wanting or liking, of rewards and punishers might be a trait marker for depression.


Proportion estimators are used frequently in many application areas. The conventional proportion estimator (number of events divided by sample size) encounters a number of problems when the data are sparse, as will be demonstrated in various settings. The problem of estimating its variance when sample sizes become small is rarely addressed in a satisfying framework. Specifically, we have in mind applications like the weighted risk difference in multicenter trials or stratified risk ratio estimators (to adjust for potential confounders) in epidemiological studies. It is suggested to estimate p using the parametric family (see PDF for character) and p(1 - p) using (see PDF for character), where (see PDF for character). We investigate the problem of choosing c from various perspectives, including minimizing the average mean squared error of (see PDF for character), and the average bias and average mean squared error of (see PDF for character). The optimal value of c for minimizing the average mean squared error of (see PDF for character) is found to be independent of n and equals c = 1. The optimal value of c for minimizing the average mean squared error of (see PDF for character) is found to depend on n, with limiting value c = 0.833. This might justify using the near-optimal value c = 1 in practice, which also turns out to be beneficial when constructing confidence intervals of the form (see PDF for character).
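The parametric family itself is garbled in this extract ("see PDF for character"). Purely for illustration, the sketch below assumes the shrinkage form (x + c)/(n + 2c) that is common in this literature, which reduces to the conventional estimator x/n at c = 0; the actual family used in the paper is not recoverable from the extract.

```python
def proportion_estimate(x, n, c=1.0):
    """Shrinkage-type proportion estimator (x + c) / (n + 2c).
    NOTE: this form is an assumption for illustration only; the exact
    family in the abstract is garbled. c = 0 recovers x / n, and a
    positive c keeps the estimate away from 0 and 1 for sparse data."""
    return (x + c) / (n + 2 * c)

def variance_estimate(x, n, c=1.0):
    """Plug-in estimate of p(1 - p)/n using the shrunk proportion."""
    p = proportion_estimate(x, n, c)
    return p * (1 - p) / n
```

With sparse data (e.g. zero events in five trials) the shrunk estimate and its variance estimate are both non-zero, which is the motivation stated above.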


To test the effectiveness of stochastic single-chain models in describing the dynamics of entangled polymers, we systematically compare one such model, the slip-spring model, to a multichain model solved using stochastic molecular dynamics (MD) simulations (the Kremer-Grest model). The comparison investigates whether the single-chain model can adequately describe both a microscopic dynamical quantity and a macroscopic rheological quantity over a range of chain lengths. Choosing a particular chain length in the slip-spring model, the parameter values that best reproduce the mean-square displacement of a group of monomers are determined by fitting to MD data. Using the same set of parameters, we then test whether the predicted mean-square displacements for other chain lengths agree with the MD calculations. We follow this with a comparison of the time-dependent stress relaxation moduli obtained from the two models for a range of chain lengths. After identifying a limitation of the original slip-spring model in describing the static structure of the polymer chain as seen in MD, we remedy it by introducing a pairwise repulsive potential between the monomers in the chains. Poor agreement of the mean-square monomer displacements at short times is rectified by using generalized Langevin equations for the dynamics, resulting in significantly improved agreement.
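The microscopic quantity used for the fit, the mean-square displacement, can be computed from a stored trajectory along these lines (a generic time-averaged estimator, not the authors' code):

```python
import numpy as np

def mean_square_displacement(traj):
    """Time-averaged mean-square displacement g(t) for a trajectory of
    shape (n_frames, n_particles, 3), e.g. monomer positions from MD.
    Returns one value per time lag 1 .. n_frames - 1."""
    n = traj.shape[0]
    msd = np.empty(n - 1)
    for lag in range(1, n):
        d = traj[lag:] - traj[:-lag]                 # displacements over `lag`
        msd[lag - 1] = (d ** 2).sum(axis=-1).mean()  # average over origins and particles
    return msd
```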


The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength to acquire information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
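The comparison statistics quoted above (bias, root-mean-squared error, relative deviation) can be computed as in the sketch below; the exact definition of the relative deviation used in the paper is an assumption here (mean absolute difference as a percentage of the reference mean).

```python
import numpy as np

def comparison_stats(retrieved, reference):
    """Bias, root-mean-squared error and relative deviation (per cent)
    between two sets of effective-radius retrievals. The relative-
    deviation definition is an illustrative assumption."""
    retrieved = np.asarray(retrieved, float)
    reference = np.asarray(reference, float)
    diff = retrieved - reference
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    rel = 100.0 * np.abs(diff).mean() / reference.mean()
    return bias, rmse, rel
```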


We study the orientational ordering on the surface of a sphere using Monte Carlo and Brownian dynamics simulations of rods interacting with an anisotropic potential. We restrict the orientations to the local tangent plane of the spherical surface and fix the position of each rod to be at a discrete point on the spherical surface. On the surface of a sphere, orientational ordering cannot be perfectly nematic due to the inevitable presence of defects. We find that the ground state of four +1/2 point defects is stable across a broad range of temperatures. We investigate the transition from disordered to ordered phase by decreasing the temperature and find a very smooth transition. We use fluctuations of the local directors to estimate the Frank elastic constant on the surface of a sphere and compare it to the planar case. We observe subdiffusive behavior in the mean square displacement of the defect cores and estimate their diffusion constants.
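The constraint that each rod's orientation lies in the local tangent plane of the sphere amounts to removing the component along the surface normal (the radial direction) and renormalising, e.g.:

```python
import numpy as np

def project_to_tangent_plane(orientation, position):
    """Project a rod orientation onto the local tangent plane of a
    sphere at `position` (the surface normal is the radial direction),
    then renormalise to a unit vector."""
    normal = position / np.linalg.norm(position)
    tangent = orientation - np.dot(orientation, normal) * normal
    return tangent / np.linalg.norm(tangent)
```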


In this study, the crosswind (the wind component perpendicular to a path, U⊥) is measured by a scintillometer and estimated with Doppler lidar above the urban environment of Helsinki, Finland, for 15 days. The scintillometer provides a path-averaged value of U⊥, while the lidar provides path-resolved values U⊥(x), where x is the position along the path. The goal of this study is to evaluate the performance of scintillometer U⊥ estimates under conditions for which U⊥(x) is variable. Two methods are applied to estimate U⊥ from the scintillometer signal: the cumulative-spectrum method (which relies on scintillation spectra) and the look-up-table method (which relies on time-lagged correlation functions). The U⊥ values from both methods compare well with the lidar estimates, with root-mean-square deviations of 0.71 and 0.73 m s−1. This indicates that, given the data treatment applied in this study, both measurement technologies are able to obtain estimates of U⊥ in the complex urban environment. A detailed investigation of four cases indicates that the cumulative-spectrum method is less susceptible to a variable U⊥(x) than the look-up-table method. However, the look-up-table method can be adjusted to improve its capability to estimate U⊥ under conditions for which U⊥(x) is variable.
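As an illustration of the idea behind correlation-based methods such as the look-up-table approach, a crosswind estimate can be formed from the time lag that maximises the cross-correlation of two signals recorded a known distance apart. This is a highly simplified sketch, not the algorithm used in the study.

```python
import numpy as np

def crosswind_from_lag(sig_a, sig_b, dt, separation):
    """Estimate the crosswind as separation / transit time, where the
    transit time is the lag maximising the cross-correlation between
    the upstream signal `sig_a` and the downstream signal `sig_b`.
    A toy sketch: real methods handle noise, sign and zero-lag cases."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode='full')
    lag = (np.argmax(corr) - (len(a) - 1)) * dt
    return separation / lag
```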


The correlated k-distribution (CKD) method is widely used in the radiative transfer schemes of atmospheric models. It involves dividing the spectrum into a number of bands and then reordering the gaseous absorption coefficients within each one. The fluxes and heating rates for each band may then be computed by discretizing the reordered spectrum into of order 10 quadrature points per major gas and performing a pseudo-monochromatic radiation calculation for each point. In this paper it is first argued that for clear-sky longwave calculations, sufficient accuracy for most applications can be achieved without the need for bands: reordering may be performed on the entire longwave spectrum. The resulting full-spectrum correlated-k (FSCK) method requires significantly fewer pseudo-monochromatic calculations than standard CKD to achieve a given accuracy. The concept is first demonstrated by comparison with line-by-line calculations for an atmosphere containing only water vapor, showing that the accuracy of heating-rate calculations improves approximately in proportion to the square of the number of quadrature points. For more than around 20 points, the root-mean-squared error flattens out at around 0.015 K d−1 because of the imperfect rank correlation of absorption spectra at different pressures in the profile. The spectral overlap of m different gases is treated by considering an m-dimensional hypercube in which each axis corresponds to the reordered spectrum of one of the gases. This hypercube is then divided into a number of volumes, each approximated by a single quadrature point, such that the total number of quadrature points is slightly fewer than the sum of the number that would be required to treat each of the gases separately.
The gaseous absorptions for each quadrature point are optimized such that they minimize a cost function expressing the deviation of the heating rates and fluxes calculated by the FSCK method from line-by-line calculations for a number of training profiles. This approach is validated for atmospheres containing water vapor, carbon dioxide and ozone, for which it is found that in the troposphere and most of the stratosphere, heating-rate errors of less than 0.2 K d−1 can be achieved using a total of 23 quadrature points, decreasing to less than 0.1 K d−1 for 32 quadrature points. It would be relatively straightforward to extend the method to include other gases.
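The core reordering idea can be illustrated by comparing a "line-by-line" band-mean transmittance with one computed from the sorted absorption coefficients using a handful of quadrature intervals. This is a toy example with a synthetic spectrum, not the FSCK implementation.

```python
import numpy as np

def transmittance_kdist(k_spectrum, path, n_quad=10):
    """Band-mean transmittance two ways: line-by-line over the raw
    absorption spectrum, and via the k-distribution idea: sort
    (reorder) the absorption coefficients into a smooth function of
    cumulative probability g, then integrate over n_quad equal
    g-intervals, one representative point each."""
    lbl = np.exp(-k_spectrum * path).mean()
    k_sorted = np.sort(k_spectrum)
    edges = np.linspace(0, len(k_sorted), n_quad + 1).astype(int)
    kdist = sum(np.exp(-k_sorted[a:b].mean() * path) * (b - a)
                for a, b in zip(edges[:-1], edges[1:])) / len(k_sorted)
    return lbl, kdist
```

Because the sorted spectrum is smooth, a few quadrature intervals reproduce the integral that a fine spectral grid would give.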


Adaptive filters used in code-division multiple-access (CDMA) receivers to counter interference have been formulated both with and without the assumption that training symbols are transmitted; they are known as training-based and blind detectors, respectively. We show that the convergence behaviour of the blind minimum-output-energy (MOE) detector can be derived quite easily, in contrast to what was implied by the procedure outlined in a previous paper. The simplification results from the observation that the correlation matrix determining convergence performance can be made symmetric, after which many standard results from the literature on least-mean-square (LMS) filters apply immediately.
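For reference, a minimal LMS filter, the class of algorithms whose standard convergence results become applicable once the correlation matrix is symmetric, might look like this (a generic sketch, not any specific detector implementation):

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.01):
    """Least-mean-square adaptive filter: at each step, form the error
    e = d - w.u and update the weights with the stochastic gradient
    w += mu * e * u. Returns the final weights and the error history."""
    w = np.zeros(n_taps)
    errors = np.empty(len(x) - n_taps)
    for i in range(n_taps, len(x)):
        u = x[i - n_taps:i][::-1]      # most recent sample first
        e = d[i] - w @ u
        w += mu * e * u
        errors[i - n_taps] = e
    return w, errors
```

With white-noise input and a noiseless desired signal generated by a fixed FIR filter, the weights converge to that filter's taps.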


As low-carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in demand on low voltage networks through smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak demand reduction brought about by such devices by helping to plan appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required that can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called "double penalty" effect incurred, under traditional point-wise metrics such as the mean absolute error and p-norms in general, by forecasts whose features are displaced in space or time. The measure we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters and discuss the effect of the permutation restriction.
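A toy version of the restricted-permutation idea, here limited to moving each forecast point by at most one time step (so the allowed permutations are sets of non-overlapping adjacent swaps, solvable by dynamic programming), could look like this; the measure in the paper is more general.

```python
import numpy as np

def adjusted_mae(forecast, actual):
    """Minimum mean absolute error over permutations of the forecast
    that displace each point by at most one time step. A sketch of the
    restricted-permutation idea with window 1, solved by a simple DP:
    at each position either keep the point or swap it with its
    predecessor."""
    f, a = np.asarray(forecast, float), np.asarray(actual, float)
    n = len(f)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    for i in range(1, n + 1):
        best[i] = best[i - 1] + abs(f[i - 1] - a[i - 1])      # keep point i-1
        if i >= 2:                                            # swap points i-2, i-1
            swap = abs(f[i - 1] - a[i - 2]) + abs(f[i - 2] - a[i - 1])
            best[i] = min(best[i], best[i - 2] + swap)
    return best[n] / n
```

A forecast peak displaced by one step is fully forgiven, whereas the plain point-wise MAE penalises it twice (once for the miss, once for the false alarm).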


Most operational Sea Surface Temperature (SST) products derived from satellite infrared radiometry use multi-spectral algorithms. They show, in general, reasonable performance, with root mean square (RMS) residuals around 0.5 K when validated against buoy measurements, but have limitations, particularly a component of the retrieval error that relates to such algorithms' limited ability to cope with the full variability of atmospheric absorption and emission. We propose to use forecast atmospheric profiles and a radiative transfer model to simulate the algorithmic errors of multi-spectral algorithms. In the practical case of SST derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG), we demonstrate that simulated algorithmic errors explain a significant component of the actual errors observed for the non-linear (NL) split-window algorithm in operational use at the Centre de Météorologie Spatiale (CMS). The simulated errors, used as correction terms, significantly reduce the regional biases of the NL algorithm as well as the standard deviation of its differences from drifting buoy measurements. The availability of atmospheric profiles associated with observed satellite-buoy differences allows us to analyze the origins of the main algorithmic errors observed in the SEVIRI field of view: a negative bias in the inter-tropical zone and a positive bias at mid-latitudes. We demonstrate how these errors are explained by the sensitivity of observed brightness temperatures to the vertical distribution of water vapour, propagated through the SST retrieval algorithm.


A regional study of the prediction of extratropical cyclones by the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) has been performed. An objective feature-tracking method has been used to identify and track the cyclones along the forecast trajectories. Forecast error statistics have then been produced for the position, intensity, and propagation speed of the storms. In previous work, data limitations meant it was only possible to present the diagnostics for the entire Northern Hemisphere (NH) or Southern Hemisphere. A larger data sample has allowed the diagnostics to be computed separately for smaller regions around the globe and has made it possible to explore the regional differences in the prediction of storms by the EPS. Results show that in the NH there is a larger ensemble mean error in the position of storms over the Atlantic Ocean. Further analysis revealed that this is mainly due to errors in the prediction of storm propagation speed rather than direction. Forecast storms propagate too slowly in all regions, but the bias is about twice as large in the NH Atlantic region. The results show that storm intensity is generally overpredicted over the ocean and underpredicted over the land, and that the absolute error in intensity is larger over the ocean than over the land. In the NH, large errors occur in the prediction of the intensity of storms that originate as tropical cyclones but then move into the extratropics. The ensemble is underdispersive for the intensity of cyclones (i.e., the spread is smaller than the mean error) in all regions. The spatial patterns of the ensemble mean error and ensemble spread are very different for the intensity of cyclones. Spatial distributions of the ensemble mean error suggest that large errors occur during the growth phase of storm development, but this is not indicated by the spatial distributions of the ensemble spread. In the NH there are further differences.
First, the large errors in the prediction of the intensity of cyclones that originate in the tropics are not indicated by the spread. Second, the ensemble mean error is larger over the Pacific Ocean than over the Atlantic, whereas the opposite is true for the spread. The value of a storm-tracking approach to both weather forecasters and developers of forecast systems is also discussed.
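The spread-versus-error comparison used to diagnose underdispersion can be sketched with generic definitions (not the EPS diagnostics code): an ensemble is underdispersive when the RMS deviation of the members about the ensemble mean is smaller than the RMS error of the ensemble mean itself.

```python
import numpy as np

def spread_error(ensemble, truth):
    """Ensemble-mean RMS error and ensemble spread (RMS deviation of
    the members about the ensemble mean). Spread smaller than error
    indicates an underdispersive ensemble."""
    ens = np.asarray(ensemble, float)          # shape (n_members, n_cases)
    mean = ens.mean(axis=0)
    error = np.sqrt(((mean - np.asarray(truth, float)) ** 2).mean())
    spread = np.sqrt(((ens - mean) ** 2).mean())
    return error, spread
```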


Understanding links between the El Niño-Southern Oscillation (ENSO) and snow would be useful for seasonal forecasting, and also for understanding natural variability and interpreting climate change predictions. Here, a 545-year run of the general circulation model HadCM3, with prescribed external forcings and fixed greenhouse gas concentrations, is used to explore the impact of ENSO on snow water equivalent (SWE) anomalies. In North America, positive ENSO events reduce the mean SWE and skew the distribution towards lower values, and vice versa during negative ENSO events. This is associated with a dipole SWE anomaly structure, with anomalies of opposite sign centered in western Canada and the central United States. In Eurasia, warm episodes lead to a more positively skewed distribution and a raised mean SWE; again, the opposite effect is seen during cold episodes. In Eurasia the largest anomalies are concentrated in the Himalayas. These correlations with the February SWE distribution are seen to exist from the previous June-July-August (JJA) ENSO index onwards, and are weakly detected in 50-year subsections of the control run, but only a shifted North American response can be detected in the analysis of 40 years of ERA-40 reanalysis data. The ENSO signal in SWE from the long run could still contribute to regional predictions, although it would be a weak indicator only.
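The composite skewness analysis described above can be sketched as follows; the threshold on a normalised ENSO index is an illustrative assumption, not the criterion used in the study.

```python
import numpy as np

def conditional_skewness(swe, enso_index, threshold=0.5):
    """Moment skewness of SWE anomalies composited over positive and
    negative ENSO phases, defined here by a threshold on a normalised
    ENSO index (the threshold is illustrative)."""
    def skew(x):
        x = np.asarray(x, float)
        return ((x - x.mean()) ** 3).mean() / x.std() ** 3
    swe = np.asarray(swe, float)
    enso = np.asarray(enso_index, float)
    return skew(swe[enso > threshold]), skew(swe[enso < -threshold])
```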


Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface.
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation.
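The conventional non-spatial validation statistics mentioned at the start of the abstract can be computed simply, which highlights what the spatial, scale-resolving approach adds beyond them:

```python
import numpy as np

def nonspatial_validation(obs, pred):
    """The conventional non-spatial validation statistics: the two
    means, the mean squared difference, and the correlation. These
    summarise agreement without regard to spatial scale."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    msd = ((obs - pred) ** 2).mean()
    r = np.corrcoef(obs, pred)[0, 1]
    return obs.mean(), pred.mean(), msd, r
```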


Ozone and its precursors were measured on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe 146 Atmospheric Research Aircraft during the monsoon season of 2006 as part of the African Monsoon Multidisciplinary Analysis (AMMA) campaign. One of the main features observed in the West African boundary layer is the increase of the ozone mixing ratios from 25 ppbv over the forested area (south of 12° N) up to 40 ppbv over the Sahelian area. We employ a two-dimensional (latitudinal versus vertical) meteorological model coupled with an O3-NOx-VOC chemistry scheme to simulate the distribution of trace gases over West Africa during the monsoon season and to analyse the processes involved in the establishment of such a gradient. Including an additional source of NO over the Sahelian region to account for NO emitted by soils, we simulate a mean NOx concentration of 0.7 ppbv at 16° N, versus 0.3 ppbv over the vegetated region further south, in reasonable agreement with the observations. As a consequence, ozone is photochemically produced at a rate of 0.25 ppbv h−1 over the vegetated region, whilst the rate reaches up to 0.75 ppbv h−1 at 16° N. We find that the modelled gradient is due to a combination of enhanced deposition to vegetation, which decreases the ozone levels by up to 11 ppbv, and the aforementioned enhanced photochemical production north of 12° N. The peroxy radicals required for this enhanced production in the north come from the oxidation of background CO and CH4 as well as from VOCs. Sensitivity studies reveal that both the background CH4 and partially oxidised VOCs, produced from the oxidation of isoprene emitted from the vegetation in the south, contribute around 5–6 ppbv to the ozone gradient. These results suggest that the northward transport of trace gases by the monsoon flux, especially during nighttime, can have a significant, though secondary, role in determining the ozone gradient in the boundary layer.
Convection, anthropogenic emissions and NO produced from lightning do not contribute to the establishment of the discussed ozone gradient.