998 results for synthetic estimation
Abstract:
Background: Advances in nutritional assessment are continuing to embrace developments in computer technology. The online Food4Me food frequency questionnaire (FFQ) was created as an electronic system for the collection of nutrient intake data. To ensure its accuracy in assessing both nutrient and food group intake, further validation against data obtained using a reliable, but independent, instrument and assessment of its reproducibility are required. Objective: The aim was to assess the reproducibility and validity of the Food4Me FFQ against a 4-day weighed food record (WFR). Methods: Reproducibility of the Food4Me FFQ was assessed using test-retest methodology by asking participants to complete the FFQ on 2 occasions 4 weeks apart. To assess the validity of the Food4Me FFQ against the 4-day WFR, half the participants were also asked to complete a 4-day WFR 1 week after the first administration of the Food4Me FFQ. Levels of agreement between nutrient and food group intakes estimated by the repeated administrations of the Food4Me FFQ, and between those estimated by the Food4Me FFQ and the 4-day WFR, were evaluated using Bland-Altman methodology and classification into quartiles of daily intake. Crude unadjusted correlation coefficients were also calculated for nutrient and food group intakes. Results: In total, 100 people participated in the assessment of reproducibility (mean age 32, SD 12 years), and 49 of these (mean age 27, SD 8 years) also took part in the assessment of validity. Crude unadjusted correlations for the repeated Food4Me FFQ ranged from .65 (vitamin D) to .90 (alcohol). The mean cross-classification into “exact agreement plus adjacent” was 92% for both nutrient and food group intakes, and Bland-Altman plots showed good agreement for energy-adjusted macronutrient intakes. 
Agreement between the Food4Me FFQ and 4-day WFR varied, with crude unadjusted correlations ranging from .23 (vitamin D) to .65 (protein, % total energy) for nutrient intakes and .11 (soups, sauces and miscellaneous foods) to .73 (yogurts) for food group intake. The mean cross-classification into “exact agreement plus adjacent” was 80% and 78% for nutrient and food group intake, respectively. There were no significant differences between energy intakes estimated using the Food4Me FFQ and 4-day WFR, and Bland-Altman plots showed good agreement for both energy and energy-controlled nutrient intakes. Conclusions: The results demonstrate that the online Food4Me FFQ is reproducible for assessing nutrient and food group intake and has moderate agreement with the 4-day WFR for assessing energy and energy-adjusted nutrient intakes. The Food4Me FFQ is a suitable online tool for assessing dietary intake in healthy adults.
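The agreement statistics above rest on the Bland-Altman calculation, which can be illustrated with a minimal sketch (the intake values below are made up for illustration, not the study's data):

```python
import numpy as np

def bland_altman(x, y):
    """Return mean difference (bias) and 95% limits of agreement
    between two measurement methods (e.g. FFQ vs. weighed food record)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical energy intakes (kcal/day) from the two instruments
ffq = np.array([2100, 1850, 2400, 1990, 2250, 2050])
wfr = np.array([2000, 1900, 2300, 2050, 2200, 2100])
bias, lo, hi = bland_altman(ffq, wfr)
```

Good agreement corresponds to a bias near zero and narrow limits that bracket it.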
Abstract:
This paper presents results of the AQL2004 project, which has been developed within the GOFC-GOLD Latin American network of remote sensing and forest fires (RedLatif). The project intended to obtain monthly burned-land maps of the entire region, from Mexico to Patagonia, using MODIS (moderate-resolution imaging spectroradiometer) reflectance data. The project has been organized in three different phases: acquisition and preprocessing of satellite data; discrimination of burned pixels; and validation of results. In the first phase, input data consisting of 32-day composites of MODIS 500-m reflectance data generated by the Global Land Cover Facility (GLCF) of the University of Maryland (College Park, Maryland, U.S.A.) were collected and processed. The discrimination of burned areas was addressed in two steps: searching for "burned core" pixels using postfire spectral indices and multitemporal change detection, and mapping of burned scars using contextual techniques. The validation phase was based on visual analysis of Landsat and CBERS (China-Brazil Earth Resources Satellite) images. Validation of the burned-land category showed an agreement ranging from 30% to 60%, depending on the ecosystem and vegetation species present. The total burned area for the entire year was estimated to be 153 215 km2. The most affected countries in relation to their territory were Cuba, Colombia, Bolivia, and Venezuela. Burned areas were found in most land covers; herbaceous vegetation (savannas and grasslands) presented the highest proportions of burned area, while perennial forest had the lowest proportions. The importance of croplands in the total burned area should be treated with caution, since this cover presented the highest commission errors. The importance of generating systematic products of burned land areas for different ecological processes is emphasized.
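The "burned core" step can be sketched generically with a multitemporal drop in a postfire spectral index such as the Normalized Burn Ratio; the index choice, threshold, and toy reflectances below are illustrative assumptions, not the actual AQL2004 parameters:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectances."""
    return (nir - swir) / (nir + swir)

def burned_core(nir_pre, swir_pre, nir_post, swir_post, thresh=0.27):
    """Flag 'burned core' pixels where the multitemporal NBR drop
    (dNBR = pre - post) exceeds an illustrative threshold."""
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    return dnbr > thresh

# Toy 2x2 scene: pixel (0,0) burns between dates (NIR drops, SWIR rises)
nir_pre   = np.array([[0.45, 0.40], [0.42, 0.41]])
swir_pre  = np.array([[0.20, 0.22], [0.21, 0.20]])
nir_post  = np.array([[0.15, 0.39], [0.41, 0.40]])
swir_post = np.array([[0.30, 0.23], [0.22, 0.21]])
mask = burned_core(nir_pre, swir_pre, nir_post, swir_post)
```

In the full scheme, a contextual region-growing pass would then extend such seed pixels into complete burn scars.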
Abstract:
Satellite-based (e.g., Synthetic Aperture Radar [SAR]) water level observations (WLOs) of the floodplain can be sequentially assimilated into a hydrodynamic model to decrease forecast uncertainty. This has the potential to keep the forecast on track, so providing an Earth Observation (EO) based flood forecast system. However, the operational applicability of such a system for floods developed over river networks requires further testing. One of the promising techniques for assimilation in this field is the family of ensemble Kalman (EnKF) filters. These filters use a limited-size ensemble representation of the forecast error covariance matrix. This representation tends to develop spurious correlations as the forecast-assimilation cycle proceeds, which is a further complication for dealing with floods in either urban areas or river junctions in rural environments. Here we evaluate the assimilation of WLOs obtained from a sequence of real SAR overpasses (the X-band COSMO-Skymed constellation) in a case study. We show that a direct application of a global Ensemble Transform Kalman Filter (ETKF) suffers from filter divergence caused by spurious correlations. However, a spatially-based filter localization provides a substantial moderation in the development of the forecast error covariance matrix, directly improving the forecast and also making it possible to further benefit from a simultaneous online inflow error estimation and correction. Additionally, we propose and evaluate a novel along-network metric for filter localization, which is physically-meaningful for the flood over a network problem. Using this metric, we further evaluate the simultaneous estimation of channel friction and spatially-variable channel bathymetry, for which the filter seems able to converge simultaneously to sensible values. Results also indicate that friction is a second order effect in flood inundation models applied to gradually varied flow in large rivers. 
The study is not conclusive regarding whether in an operational situation the simultaneous estimation of friction and bathymetry helps the current forecast. Overall, the results indicate the feasibility of stand-alone EO-based operational flood forecasting.
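The spatially-based localization that moderates spurious ensemble correlations can be sketched with the widely used Gaspari-Cohn taper applied as a Schur (element-wise) product to a sample covariance; the ensemble, grid, and localization radius below are made up for illustration:

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order taper: 1 at r = 0, 0 for r >= 2."""
    r = np.abs(r)
    taper = np.zeros_like(r)
    a = r <= 1
    b = (r > 1) & (r < 2)
    taper[a] = (((-0.25*r[a] + 0.5)*r[a] + 0.625)*r[a] - 5/3)*r[a]**2 + 1
    taper[b] = ((((r[b]/12 - 0.5)*r[b] + 0.625)*r[b] + 5/3)*r[b] - 5)*r[b] \
               + 4 - 2/(3*r[b])
    return taper

# Localize a small-ensemble covariance: far-apart entries are damped to zero
n = 5
rng = np.random.default_rng(0)
ens = rng.standard_normal((n, 10))           # 10-member ensemble, 5 state points
P = np.cov(ens)                              # contains spurious long-range terms
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # grid distance
P_loc = P * gaspari_cohn(dist / 1.5)         # localization radius ~1.5 cells
```

An along-network variant, as proposed in the study, would simply replace the Euclidean `dist` with distance measured along the river network.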
Abstract:
A procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) is proposed for operational rainfall estimation using rain gauges and radar data. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on Barnes' objective analysis scheme (OAS), whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The procedure is suited to relatively sparse rain gauge networks. To show the procedure, six storms are analyzed at hourly steps over 10,663 km2. Results generally indicated an improved quality with respect to other methods evaluated: a standard mean-field bias adjustment, a spatially variable adjustment with multiplicative factors, and ordinary cokriging.
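Barnes' objective analysis scheme, on which CMA-OAS builds, can be sketched as a successive correction of residuals with a Gaussian kernel smoother; the gauge locations, values, and kernel parameters below are illustrative only:

```python
import numpy as np

def barnes(xy_obs, val_obs, xy_grid, kappa, gamma=0.3, passes=2):
    """Barnes successive-correction analysis: smooth the observations with a
    Gaussian kernel, then smooth the residuals with a tightened kernel."""
    d2 = ((xy_grid[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1)
    d2_obs = ((xy_obs[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1)
    grid = np.zeros(len(xy_grid))
    at_obs = np.zeros(len(xy_obs))
    k = kappa
    for _ in range(passes):
        w = np.exp(-d2 / k)
        w_obs = np.exp(-d2_obs / k)
        resid = val_obs - at_obs          # residuals at the support points
        grid += w @ resid / w.sum(1)
        at_obs += w_obs @ resid / w_obs.sum(1)
        k *= gamma                        # tighten the kernel each pass
    return grid

# Hypothetical hourly gauge accumulations (mm) along a 1D transect (km)
gauges = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
rain = np.array([2.0, 8.0, 3.0])
grid = np.array([[x, 0.0] for x in np.linspace(0, 10, 5)])
field = barnes(gauges, rain, grid, kappa=8.0)
```

CMA-OAS feeds locally estimated multiplicative-additive radar biases into this interpolation instead of the raw gauge values.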
Abstract:
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold are derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators.
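For Gaussian kernels, the integrated-square term of the criterion has a closed form, because the product of two Gaussians integrates to a Gaussian in their means. The sketch below evaluates that term for a small mixture with simplex-constrained weights; it does not reproduce the paper's Riemannian trust-region solver:

```python
import numpy as np

def gauss(x, mu, var):
    """Univariate Gaussian density."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def mixture_density(x, centers, weights, h):
    """Finite-mixture KDE; the weights live on the probability simplex."""
    return sum(w * gauss(x, c, h**2) for w, c in zip(weights, centers))

def integrated_square(centers, weights, h):
    """Closed form of the integral of f^2: the product of two Gaussian
    kernels integrates to a Gaussian in the kernel centers."""
    Q = gauss(centers[:, None], centers[None, :], 2 * h**2)
    return weights @ Q @ weights

centers = np.array([-1.0, 0.0, 1.0])
weights = np.array([0.2, 0.5, 0.3])   # nonnegative, sum to 1 (simplex)
h = 0.5
ise_term = integrated_square(centers, weights, h)
```

Minimizing such a quadratic criterion over the simplex is what drives many mixing coefficients to exactly zero, yielding sparsity.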
Abstract:
A new class of parameter estimation algorithms is introduced for Gaussian process regression (GPR) models. It is shown that the integration of the GPR model with probability distance measures of (i) the integrated square error and (ii) Kullback–Leibler (K–L) divergence is analytically tractable. An efficient coordinate descent algorithm is proposed to iteratively estimate the kernel width using golden section search, which includes a fast gradient descent algorithm as an inner loop to estimate the noise variance. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
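The golden section line search used for the kernel width can be sketched as follows; the quadratic objective is a hypothetical stand-in for the actual kernel-width criterion:

```python
import numpy as np

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for the minimiser of a unimodal f on [a, b].
    (Re-evaluates both probes each iteration for clarity, not speed.)"""
    invphi = (np.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                    # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                    # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Toy objective standing in for the kernel-width criterion
xmin = golden_section(lambda w: (w - 1.3) ** 2 + 0.5, 0.01, 5.0)
```

In the proposed coordinate descent, each such outer line search over the width would wrap a gradient-based inner loop for the noise variance.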
Abstract:
The purity and structural stability of the high thermoelectric performance Cu12Sb4S13 and Cu10.4Ni1.6Sb4S13 tetrahedrite phases, synthesized by solid–liquid–vapor reaction and Spark Plasma Sintering, were studied at high temperature by Rietveld refinement using high resolution X-ray powder diffraction data, DSC/TG measurements and high resolution transmission electron microscopy. In a complementary study, the crystal structure of Cu10.5Ni1.5Sb4S13 as a function of temperature was investigated by powder neutron diffraction. The temperature dependence of the structural stability of ternary Cu12Sb4S13 is markedly different to that of the nickel-substituted phases, providing clear evidence for the significant and beneficial role of nickel substitution on both sample purity and stability of the tetrahedrite phase. Moreover, kinetic effects on the phase stability/decomposition have been identified and discussed in order to determine the maximum operating temperature for thermoelectric applications. The thermoelectric properties of these compounds have been determined for high density samples (>98%) prepared by Spark Plasma Sintering and therefore can be used as reference values for tetrahedrite samples. The maximum ZT of 0.8 was found for Cu10.4Ni1.6Sb4S13 at 700 K.
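For context, the dimensionless thermoelectric figure of merit quoted is ZT = S²σT/κ. A minimal calculation with illustrative order-of-magnitude inputs (not the measured properties of these samples) is:

```python
def figure_of_merit(seebeck, sigma, kappa, T):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
    Units: S in V/K, sigma in S/m, kappa in W/(m K), T in K."""
    return seebeck ** 2 * sigma * T / kappa

# Illustrative tetrahedrite-like values (hypothetical, for scale only)
zt = figure_of_merit(seebeck=200e-6, sigma=3e4, kappa=1.0, T=700)
```

The trade-off is visible directly: raising electrical conductivity helps only insofar as the thermal conductivity and Seebeck coefficient are not degraded.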
Abstract:
During the development of new therapies, it is not uncommon to test whether a new treatment works better than the existing treatment for all patients who suffer from a condition (full population) or for a subset of the full population (subpopulation). One approach that may be used for this objective is to have two separate trials, where in the first trial, data are collected to determine if the new treatment benefits the full population or the subpopulation. The second trial is a confirmatory trial to test the new treatment in the population selected in the first trial. In this paper, we consider the more efficient two-stage adaptive seamless designs (ASDs), where in stage 1, data are collected to select the population to test in stage 2. In stage 2, additional data are collected to perform confirmatory analysis for the selected population. Unlike the approach that uses two separate trials, for ASDs, stage 1 data are also used in the confirmatory analysis. Although ASDs are efficient, using stage 1 data both for selection and confirmatory analysis introduces selection bias and consequently statistical challenges in making inference. We will focus on point estimation for such trials. Here, we describe the extent of bias for estimators that ignore the multiple hypotheses and the selection, based on observed stage 1 data, of the population most likely to give positive trial results. We then derive conditionally unbiased estimators and examine their mean squared errors for different scenarios.
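The selection bias described can be demonstrated with a small Monte Carlo sketch: if the population with the larger observed stage 1 effect is selected, the naive estimate of its effect is biased upward even when both true effects are equal. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect = np.array([0.3, 0.3])   # full population and subpopulation
n_trials, se = 20_000, 0.2           # simulated trials; stage 1 standard error

# Stage 1: observe noisy effects, select the larger;
# the naive estimate is the observed effect in the selected population
obs = true_effect + se * rng.standard_normal((n_trials, 2))
naive = obs.max(axis=1)
bias = naive.mean() - true_effect.max()
```

For two equal effects the expected bias is se/sqrt(pi), about 0.11 here, which is what the simulation recovers; conditionally unbiased estimators are designed to remove exactly this inflation.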
Abstract:
We develop a method to derive aerosol properties over land surfaces using combined spectral and angular information, such as that available from the ESA Sentinel-3 mission, to be launched in 2015. A method of estimating aerosol optical depth (AOD) using only angular retrieval has previously been demonstrated on data from the ENVISAT and PROBA-1 satellite instruments, and is extended here to the synergistic spectral and angular sampling of Sentinel-3. The method aims to improve the estimation of AOD, and to explore the estimation of fine mode fraction (FMF) and single scattering albedo (SSA) over land surfaces by inversion of a coupled surface/atmosphere radiative transfer model. The surface model includes a general physical model of angular and spectral surface reflectance. An iterative process is used to determine the optimum value of the aerosol properties providing the best fit of the corrected reflectance values to the physical model. The method is tested using hyperspectral, multi-angle Compact High Resolution Imaging Spectrometer (CHRIS) images. The values obtained from these CHRIS observations are validated using ground-based sun photometer measurements. Results from 22 image sets using the synergistic retrieval and improved aerosol models show an RMSE of 0.06 in AOD, reduced to 0.03 over vegetated targets.
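The iterative best-fit step can be sketched as a one-parameter least-squares inversion of a toy coupled surface/atmosphere reflectance model; the model form, coefficients, and spectra below are invented for illustration and are far simpler than the physical model actually used:

```python
import numpy as np

def toa_reflectance(aod, surf, wav):
    """Very simplified top-of-atmosphere reflectance: an aerosol path term
    that grows with AOD (stronger at short wavelengths) plus the
    atmospherically attenuated surface reflectance."""
    atten = (0.55 / wav) ** 1.3
    path = 0.12 * aod * atten
    trans = np.exp(-0.5 * aod * atten)
    return path + trans * surf

wavs = np.array([0.44, 0.55, 0.67, 0.87])   # wavelengths (micron)
surf = np.array([0.03, 0.05, 0.08, 0.25])   # assumed-known surface spectrum
obs = toa_reflectance(0.3, surf, wavs)      # synthetic observation, AOD = 0.3

# Iterative best fit: scan candidate AODs, keep the least-squares optimum
grid = np.linspace(0.0, 1.0, 1001)
costs = [((toa_reflectance(a, surf, wavs) - obs) ** 2).sum() for a in grid]
aod_hat = grid[int(np.argmin(costs))]
```

The real retrieval additionally fits FMF and SSA and couples the angular surface model, but the fitting principle is the same.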
Abstract:
Active remote sensing of marine boundary-layer clouds is challenging as drizzle drops often dominate the observed radar reflectivity. We present a new method to simultaneously retrieve cloud and drizzle vertical profiles in drizzling boundary-layer clouds using surface-based observations of radar reflectivity, lidar attenuated backscatter, and zenith radiances under conditions when precipitation does not reach the surface. Specifically, the vertical structure of droplet size and water content of both cloud and drizzle is characterised throughout the cloud. An ensemble optimal estimation approach provides full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from large-eddy simulation snapshots of cumulus under stratocumulus, where cloud water path is retrieved with an error of 31 g m−2 . The method also performs well in non-drizzling clouds where no assumption of the cloud profile is required. We then apply the method to observations of marine stratocumulus obtained during the Atmospheric Radiation Measurement MAGIC deployment in the Northeast Pacific. Here, retrieved cloud water path agrees well with independent three-channel microwave radiometer retrievals, with a root mean square difference of 10–20 g m−2.
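The ensemble flavour of such a retrieval can be sketched generically: weight a prior ensemble of states by the observation likelihood to obtain a posterior mean and spread. The toy forward model and numbers below are assumptions, not the paper's cloud/drizzle forward model:

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(lwp):
    """Toy forward model: a radar-style observable (dB) from water path."""
    return 10 * np.log10(lwp)

truth = 150.0                               # cloud water path (g m^-2)
obs_sd = 0.5                                # observation error (dB)
y = forward(truth) + rng.normal(0, obs_sd)  # one noisy synthetic observation

# Ensemble estimation: weight a prior ensemble by the observation
# likelihood, then read off the posterior mean and error spread
prior = rng.lognormal(mean=np.log(120), sigma=0.5, size=5000)
w = np.exp(-0.5 * ((forward(prior) - y) / obs_sd) ** 2)
w /= w.sum()
post_mean = (w * prior).sum()
post_sd = np.sqrt((w * (prior - post_mean) ** 2).sum())
```

The posterior spread shrinks well below the prior spread, which is the sense in which the ensemble provides full error statistics for the retrieval.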
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve these. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. 
However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with standard deviation 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
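The core statistical step, averaging the heights of adjacent waterline pixels that share a common population mean, can be sketched as follows; averaging n samples cuts the random error by roughly a factor of sqrt(n). The heights below are synthetic, not IDEM data:

```python
import numpy as np

def smooth_waterline(heights, half_window=5):
    """Replace each waterline pixel height by the mean over a local window,
    treating neighbours along the quasi-contour as samples of one mean."""
    out = np.empty(len(heights))
    for i in range(len(heights)):
        lo, hi = max(0, i - half_window), min(len(heights), i + half_window + 1)
        out[i] = heights[lo:hi].mean()
    return out

rng = np.random.default_rng(1)
true_level = 42.0                                    # flat local water level (m)
noisy = true_level + 2.0 * rng.standard_normal(500)  # ~2 m DEM height noise
corrected = smooth_waterline(noisy)
rmse_before = np.sqrt(np.mean((noisy - true_level) ** 2))
rmse_after = np.sqrt(np.mean((corrected - true_level) ** 2))
```

With an 11-pixel window the error drops by roughly a factor of three, consistent with the order of improvement reported for the IDEM experiment.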
Abstract:
We present a novel algorithm for concurrent model state and parameter estimation in nonlinear dynamical systems. The new scheme uses ideas from three dimensional variational data assimilation (3D-Var) and the extended Kalman filter (EKF) together with the technique of state augmentation to estimate uncertain model parameters alongside the model state variables in a sequential filtering system. The method is relatively simple to implement and computationally inexpensive to run for large systems with relatively few parameters. We demonstrate the efficacy of the method via a series of identical twin experiments with three simple dynamical system models. The scheme is able to recover the parameter values to a good level of accuracy, even when observational data are noisy. We expect this new technique to be easily transferable to much larger models.
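State augmentation can be sketched in a minimal identical-twin experiment: the unknown parameter is appended to the state vector and estimated alongside it by an extended Kalman filter. The scalar model below is a toy assumption, not one of the systems used in the study:

```python
import numpy as np

# Identical-twin experiment: recover the parameter a of x_{k+1} = a * x_k
# by augmenting the state to z = (x, a) and filtering noisy observations.
a_true = 0.97
rng = np.random.default_rng(3)
xs = [5.0]
for _ in range(200):
    xs.append(a_true * xs[-1])
obs = np.array(xs[1:]) + 0.05 * rng.standard_normal(200)

z = np.array([4.0, 0.5])               # poor initial guesses for x and a
P = np.diag([1.0, 1.0])                # initial error covariance
Q = np.diag([1e-4, 1e-6])              # small model error keeps filter stable
R = 0.05 ** 2                          # observation error variance
H = np.array([[1.0, 0.0]])             # we observe x only, never a

for y in obs:
    x, a = z
    z = np.array([a * x, a])           # forecast the augmented state
    F = np.array([[a, x], [0.0, 1.0]]) # Jacobian of the augmented model
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T / S                    # Kalman gain (scalar observation)
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
```

The cross-covariance between x and a, built up through the Jacobian term F[0, 1] = x, is what lets observations of x alone correct the parameter estimate.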
Abstract:
Optimal state estimation is a method that requires minimising a weighted, nonlinear, least-squares objective function in order to obtain the best estimate of the current state of a dynamical system. Often the minimisation is non-trivial due to the large scale of the problem, the relative sparsity of the observations and the nonlinearity of the objective function. To simplify the problem the solution is often found via a sequence of linearised objective functions. The condition number of the Hessian of the linearised problem is an important indicator of the convergence rate of the minimisation and the expected accuracy of the solution. In the standard formulation the convergence is slow, indicating an ill-conditioned objective function. A transformation to different variables is often used to ameliorate the conditioning of the Hessian by changing, or preconditioning, the Hessian. There is only sparse information in the literature for describing the causes of ill-conditioning of the optimal state estimation problem and explaining the effect of preconditioning on the condition number. This paper derives descriptive theoretical bounds on the condition number of both the unpreconditioned and preconditioned system in order to better understand the conditioning of the problem. We use these bounds to explain why the standard objective function is often ill-conditioned and why a standard preconditioning reduces the condition number. We also use the bounds on the preconditioned Hessian to understand the main factors that affect the conditioning of the system. We illustrate the results with simple numerical experiments.
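The effect of a standard preconditioning on conditioning can be illustrated directly. For a quadratic objective with Hessian B⁻¹ + HᵀR⁻¹H, transforming to v with x = x_b + B^{1/2}v gives the Hessian I + B^{T/2}HᵀR⁻¹HB^{1/2}, whose spectrum clusters at 1. The covariance model and observation pattern below are illustrative assumptions:

```python
import numpy as np

n = 50
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 0.8 ** dist                 # correlated background errors (AR(1)-like)
H = np.eye(n)[::5]              # observe every 5th state variable
Rinv = np.eye(H.shape[0])       # unit observation-error variance

hess = np.linalg.inv(B) + H.T @ Rinv @ H           # unpreconditioned Hessian
L = np.linalg.cholesky(B)                          # B = L L^T
hess_pre = np.eye(n) + L.T @ H.T @ Rinv @ H @ L    # after the B^{1/2} transform

kappa = np.linalg.cond(hess)
kappa_pre = np.linalg.cond(hess_pre)
```

The unpreconditioned condition number inherits the full spread of B's spectrum, while the preconditioned Hessian has n minus rank(H) unit eigenvalues, so its condition number is bounded by one plus the largest eigenvalue of the observation term.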
Abstract:
Partial budgeting was used to estimate the net benefit of blending Jersey milk into Holstein-Friesian milk for Cheddar cheese production. Jersey milk increases Cheddar cheese yield. However, the cost of Jersey milk is also higher; thus, determining the balance of profitability is necessary, including consideration of seasonal effects. Input variables were based on a pilot plant experiment run from 2012 to 2013 and industry milk and cheese prices during this period. When Jersey milk was used at an increasing rate with Holstein-Friesian milk (25, 50, 75, and 100% Jersey milk), it resulted in an increase of average net profit of 3.41, 6.44, 8.57, and 11.18 pence per kilogram of milk, respectively, and this additional profit was constant throughout the year. Sensitivity analysis showed that the most influential input on additional profit was cheese yield, whereas cheese price and milk price had a small effect. The minimum increase in yield, which was necessary for the use of Jersey milk to be profitable, was 2.63, 7.28, 9.95, and 12.37% at 25, 50, 75, and 100% Jersey milk, respectively. Including Jersey milk did not affect the quantity of whey butter and powder produced. Although further research is needed to ascertain the amount of additional profit that would be found on a commercial scale, the results indicate that using Jersey milk for Cheddar cheese making would lead to an improvement in profit for the cheese makers, especially at higher inclusion rates.
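The partial-budgeting logic can be sketched as a toy net-benefit calculation, added revenue from the yield gain minus the added milk cost; all prices and yields below are hypothetical, not the study's inputs:

```python
def partial_budget(extra_yield_frac, milk_cost_extra, cheese_price, base_yield):
    """Net effect (pence per kg milk) of blending: extra cheese revenue from
    the yield gain minus the extra cost of the Jersey milk."""
    added_revenue = base_yield * extra_yield_frac * cheese_price
    return added_revenue - milk_cost_extra

# Hypothetical figures: 10% yield gain on a base yield of 0.10 kg cheese
# per kg milk, cheese at 300 p/kg, Jersey milk costing 2 p/kg more
net = partial_budget(0.10, 2.0, 300.0, 0.10)
```

Setting the result to zero and solving for the yield gain gives the break-even yield increase, the quantity the study reports for each inclusion rate.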