894 results for Validation model
Abstract:
Land cover (LC) refers to what is actually present on the ground and provides insight into potential solutions for many issues, from water pollution to sustainable economic development. One of the greatest challenges in modeling LC change with remotely sensed (RS) data is the scale-resolution mismatch: the available spatial resolution is coarser than the detail required, and the sub-pixel heterogeneity, although important, is not readily knowable. As a result, many pixels consist of a mixture of multiple classes. Solutions to this mixed-pixel problem typically center on soft classification techniques, which estimate the proportion of each class within a pixel; the spatial distribution of these class components within the pixel, however, remains unknown. This study investigates Orthogonal Subspace Projection, an unmixing technique, combined with a pixel-swapping algorithm to predict the spatial distribution of LC at sub-pixel resolution. Both algorithms are applied to several simulated and actual satellite images for validation. Accuracy on the simulated images is approximately 100%, while IRS LISS-III and MODIS data yield accuracies of 76.6% and 73.02%, respectively. This demonstrates the relevance of these techniques for applications such as urban/non-urban and forest/non-forest classification studies.
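As a rough illustration of the second stage, the sketch below shows a pixel-swapping pass for a single coarse pixel, assuming a binary sub-pixel class map whose class proportion has already been fixed by the unmixing step; the distance-weighted attractiveness function and the stopping rule are illustrative choices, not necessarily the exact formulation used in the study.

```python
import numpy as np

def pixel_swap(subgrid, n_iter=100, radius=2.0):
    """One coarse pixel's sub-grid of binary class labels (proportion fixed).

    Repeatedly swaps the least 'attractive' class-1 sub-pixel with the most
    attractive class-0 sub-pixel, where attractiveness is a distance-weighted
    count of like-labelled neighbours (a pixel-swapping heuristic).
    """
    rows, cols = subgrid.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    for _ in range(n_iter):
        attract = np.zeros_like(subgrid, dtype=float)
        for i in range(rows):
            for j in range(cols):
                d = np.hypot(yy - i, xx - j)
                w = np.exp(-d / radius)
                w[i, j] = 0.0                  # exclude the pixel itself
                attract[i, j] = np.sum(w * subgrid)
        ones = np.argwhere(subgrid == 1)
        zeros = np.argwhere(subgrid == 0)
        if len(ones) == 0 or len(zeros) == 0:
            break
        worst_one = ones[np.argmin(attract[tuple(ones.T)])]
        best_zero = zeros[np.argmax(attract[tuple(zeros.T)])]
        # swap only while it increases spatial clustering
        if attract[tuple(best_zero)] <= attract[tuple(worst_one)]:
            break
        subgrid[tuple(worst_one)], subgrid[tuple(best_zero)] = 0, 1
    return subgrid
```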
Abstract:
Sub-pixel classification is essential for describing many land cover (LC) features whose spatial extent is smaller than the image pixel. A commonly used approach for sub-pixel classification is the linear mixture model (LMM). Although LMMs have produced acceptable results, purely linear mixtures rarely exist in practice; a non-linear mixture model may therefore better describe the mixture spectra resulting from the endmember (pure pixel) distribution. In this paper, we propose a new methodology for inferring LC fractions, termed the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process: first, endmembers are derived by an automated algorithm; second, these endmembers are used by the LMM to provide linear abundance estimates; finally, the abundance values, together with training samples representing the actual proportions, are fed to a multi-layer perceptron (MLP), which refines the abundance estimates to account for the non-linear mixing of the classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. The output showed an overall RMSE of 0.0089±0.0022 with the LMM and 0.0030±0.0001 with the MLP-based AL-NLMM when compared with the actual class proportions, indicating that the individual class abundances obtained from AL-NLMM are very close to the real observations.
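For the linear (second) step, a minimal sketch of LMM abundance estimation is given below, assuming known endmember spectra and using non-negative least squares with a simple renormalisation in place of a strict sum-to-one constraint; the automated endmember extraction and the MLP refinement stages are not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

def lmm_abundances(pixel, endmembers):
    """Estimate class abundances for one pixel with a linear mixture model.

    pixel:      (bands,) observed spectrum
    endmembers: (bands, classes) matrix of pure-class spectra
    Solves a non-negative least-squares problem, then renormalises so the
    abundances sum to one (an approximate sum-to-one constraint).
    """
    a, _ = nnls(endmembers, pixel)       # enforces non-negativity
    s = a.sum()
    return a / s if s > 0 else a

# toy example: 200-band spectra of 3 classes mixed 50/30/20
rng = np.random.default_rng(0)
E = rng.random((200, 3))
true = np.array([0.5, 0.3, 0.2])
obs = E @ true + rng.normal(0, 0.001, 200)   # small sensor noise
print(lmm_abundances(obs, E))                # approximately [0.5, 0.3, 0.2]
```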
Abstract:
Denial-of-service (DoS) attacks form a very important category of security threats prevalent in MIPv6 (Mobile Internet Protocol version 6) today. Many schemes have been proposed to alleviate such threats, including one of our own [9]. However, reasoning about the correctness of such protocols is not trivial. In addition, new solutions to mitigate attacks may need to be deployed in the network frequently, as and when attacks are detected, since it is practically impossible to anticipate all attacks and provide solutions in advance. This makes it necessary to validate solutions in a timely manner before deployment in the real network. However, the threshold schemes needed in group protocols make analysis complex, and model checking of threshold-based group protocols that employ cryptography has not been successful so far. Here, we propose a new simulation-based approach for validation using a tool called FRAMOGR, which supports executable specification of group protocols that use cryptography. FRAMOGR allows one to specify attackers and track probability distributions of values or paths. We believe that infrastructure such as FRAMOGR will be required in the future for validating new group-based threshold protocols that may be needed to make MIPv6 more robust.
Abstract:
This research develops a new technique for site characterization in a three-dimensional domain. Site characterization is a fundamental task in geotechnical engineering practice, as well as a very challenging process, with the ultimate goal of estimating soil properties at any half-space subsurface point in a site based on limited tests. In this research, the sandy site at the Texas A&M University National Geotechnical Experimentation Site is selected as an example for developing the new technique, which is based on Artificial Neural Network (ANN) technology. A sequential approach is used to demonstrate the applicability of ANNs to site characterization. To verify its robustness, the proposed technique is compared with other commonly used approaches for site characterization. In addition, an artificial site is created, in which soil property values at any half-space point are assumed, so that predicted values can be compared directly with their corresponding actual values as a means of validation. Since the three-dimensional model can estimate the soil property at any location in a site, it has many potential applications, especially where the soil properties within a zone are of interest rather than at a single point; examples of such zonal interest include soil type classification and liquefaction potential evaluation. In this regard, the present study also addresses this type of application using a site in Taiwan that experienced liquefaction during the 1999 Chi-Chi, Taiwan, earthquake.
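A minimal sketch of the core idea is shown below, using synthetic borehole data and an illustrative soil property: a small multi-layer perceptron (scikit-learn's MLPRegressor) is trained to map (x, y, depth) coordinates to the property so it can then be queried at any half-space point. The network size and data are hypothetical and do not reproduce the study's sequential ANN scheme.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# toy borehole data: columns are x, y, depth (m); the target is a generic
# soil property (values are synthetic, for illustration only)
rng = np.random.default_rng(1)
X = rng.uniform([0, 0, 0], [50, 50, 15], size=(200, 3))
y = 2.0 + 0.4 * X[:, 2] + 0.02 * X[:, 0] + rng.normal(0, 0.2, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

# estimate the property at an arbitrary half-space point of the site
print(model.predict([[25.0, 10.0, 6.5]]))
```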
Abstract:
This paper focuses on the development of a model for predicting the mean drop size in effervescent sprays. A combinatorial approach based on energy and entropy principles is followed in this modeling scheme. The model is implemented in cascade in order to account for both primary breakup (due to exploding gas bubbles) and secondary breakup (due to the shearing action of the surrounding medium). The approach is to obtain the most probable drop size distribution by maximizing the entropy while satisfying the constraints of mass and energy balance. Model predictions are compared with past experimental data for validation, and a careful experimental study conducted over a wide range of gas-to-liquid ratios shows good agreement with the predictions: the model gives accurate results in the bubbly and annular flow regimes, although discrepancies are observed in the transitional slug flow regime of the atomizer.
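A generic maximum-entropy formulation of this step is sketched below in schematic form; the constraint functions phi_m and phi_e stand in for the paper's actual mass and energy balance expressions.

```latex
% Generic maximum-entropy formulation (constraints shown schematically):
\begin{align*}
  \max_{\{p_i\}} \; S &= -\sum_i p_i \ln p_i \\
  \text{subject to}\quad \sum_i p_i &= 1, \quad
  \sum_i p_i\,\phi_m(D_i) = C_m \;\;(\text{mass balance}), \quad
  \sum_i p_i\,\phi_e(D_i) = C_e \;\;(\text{energy balance}) \\
  \Rightarrow\quad p_i &= \exp\!\big(-\lambda_0 - \lambda_m\,\phi_m(D_i) - \lambda_e\,\phi_e(D_i)\big)
\end{align*}
% D_i: drop size class; the Lagrange multipliers follow from the constraints.
```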
Abstract:
In the present study, a new turbulent premixed combustion model is proposed by integrating the Coherent Flame Model with the modified eddy dissipation concept and relating the fine-structure mass fraction to the flame surface density. First, experimental results for turbulent flame speed available in the literature are compared with predictions at different turbulence intensities to validate the flame surface density model; the model is found to predict turbulent burning speeds accurately. A comprehensive validation is then carried out using data on a turbulent lifted methane flame issuing into a vitiated co-flow. Detailed comparisons of temperature and species concentrations between experiment and simulation are performed at different heights of the flame. Overall, the model predicts both the spatial variation and the peak values of the scalars at various heights satisfactorily.
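For orientation, the standard coherent-flame-model closure for the mean reaction rate is recalled below to indicate the role played by the flame surface density; the paper's specific coupling to the fine-structure mass fraction of the modified eddy dissipation concept is not reproduced here.

```latex
% Standard coherent-flame-model closure for the mean fuel consumption rate:
\[
  \overline{\dot{\omega}}_F \;=\; \rho_u \, Y_{F,u} \, S_L \, \Sigma
\]
% \rho_u, Y_{F,u}: unburnt-gas density and fuel mass fraction,
% S_L: laminar flame speed, \Sigma: flame surface density (1/m).
```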
Abstract:
We have developed a graphical user interface based dendrimer builder toolkit (DBT) which can be used to generate the dendrimer configuration of desired generation for various dendrimer architectures. The structures generated by this tool were validated by studying the structural properties of two well known classes of dendrimers: the ethylenediamine-cored poly(amidoamine) (PAMAM) dendrimer and the diaminobutyl-cored poly(propylene imine) (PPI) dendrimer. Using fully atomistic molecular dynamics (MD) simulation, we calculated the radius of gyration, shape tensor, and monomer density distribution for the PAMAM and PPI dendrimers at neutral and high pH. Good agreement was observed between the calculated radius of gyration and available simulation and experimental (small-angle X-ray and neutron scattering; SAXS, SANS) results. With this validation, we used DBT to build a new class of nitrogen-cored poly(propyl ether imine) dendrimer and studied its structural features using all-atomistic MD simulation. DBT is a versatile tool and can easily be used to generate other dendrimer structures with different chemistry and topology. The use of the general AMBER force field to describe the intra-molecular interactions allows us to integrate this tool easily with the widely used molecular dynamics software AMBER. This makes our tool a very useful utility that can help facilitate the study of dendrimer interactions with nucleic acids, proteins, and lipid bilayers for various biological applications. © 2012 Wiley Periodicals, Inc.
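As a minimal, hypothetical illustration of the reported structural analysis (independent of DBT and of AMBER's own analysis tools), the sketch below computes the radius of gyration and the gyration (shape) tensor for a single frame of coordinates and masses.

```python
import numpy as np

def gyration_analysis(coords, masses):
    """Radius of gyration and gyration (shape) tensor for one MD frame.

    coords: (N, 3) atomic positions; masses: (N,) atomic masses.
    """
    m = masses / masses.sum()
    com = m @ coords                       # mass-weighted centre of mass
    d = coords - com
    rg = np.sqrt(np.sum(m * np.sum(d**2, axis=1)))
    S = (d * m[:, None]).T @ d             # 3x3 mass-weighted gyration tensor
    eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]
    return rg, S, eigvals                  # eigenvalues characterise the shape

# toy frame: 100 atoms of unit mass with random coordinates
rng = np.random.default_rng(2)
rg, S, ev = gyration_analysis(rng.normal(size=(100, 3)), np.ones(100))
print(rg, ev)
```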
Abstract:
Experimental and numerical studies of slurry generation using a cooling slope are presented in this paper. A cooling slope with a stainless steel body was designed and constructed to produce semisolid A356 Al alloy slurry. The pouring temperature of the molten metal, the slope angle, and the slope wall temperature were varied during the experiments. A multiphase numerical model, considering liquid metal and air, has been developed to simulate the liquid metal flow along the cooling channel using an Eulerian two-phase flow approach. The solid fraction evolution of the solidifying melt is tracked at different locations of the cooling channel following the Scheil equation. The continuity, momentum, and energy equations are solved using a thin-wall boundary condition approach. During solidification, the temperature of the alloy is continuously modified, based on the liquid fraction and the latent heat of the alloy, through a modified temperature recovery method. Numerical simulations have been carried out for semisolid slurry formation by varying process parameters such as the cooling slope angle, the slope wall temperature, and the melt superheat, to understand their effect on the slurry generation process in terms of temperature distribution, velocity distribution, and solid fraction of the solidifying melt. Experimental validation performed for selected cases shows good agreement with the numerical simulations.
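The Scheil-type relation commonly used to track solid fraction from the local temperature is recalled below, with an assumed notation (partition coefficient k_p, fusion temperature T_f of the pure solvent, liquidus temperature T_liq); the paper's exact implementation within the temperature recovery method may differ.

```latex
% Scheil-type solid fraction versus temperature (linear liquidus assumed):
\[
  f_s(T) \;=\; 1 \;-\; \left( \frac{T_f - T}{\,T_f - T_{liq}\,} \right)^{\!1/(k_p - 1)},
  \qquad T_{sol} \le T \le T_{liq}
\]
```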
Abstract:
The analytical solutions for the coupled diffusion equations encountered in diffuse fluorescence spectroscopy/imaging for regular geometries were compared with well-established numerical models based on the finite element method. A comparison between the analytical solutions obtained using zero boundary conditions and extrapolated boundary conditions (EBCs) was also performed. The results reveal that the analytical solutions are in close agreement with the numerical solutions, and that the solutions obtained using EBCs are more accurate in recovering the mean time-of-flight data than their zero-boundary-condition counterparts. The analytical solutions were also shown to be capable of providing bulk optical properties through a numerical experiment using a realistic breast model. (C) 2013 Optical Society of America
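For reference, one common continuous-wave form of the coupled diffusion equations is written below to make the excitation-emission coupling explicit; it is shown only as an assumed illustration of the structure, not as the study's analytical solutions for regular geometries.

```latex
% A common continuous-wave form of the coupled diffusion equations
% (x: excitation wavelength, m: emission wavelength):
\begin{align*}
  -\nabla\!\cdot\!\big[D_x(\mathbf{r})\nabla\Phi_x(\mathbf{r})\big]
    + \mu_{ax}(\mathbf{r})\,\Phi_x(\mathbf{r}) &= S(\mathbf{r}), \\
  -\nabla\!\cdot\!\big[D_m(\mathbf{r})\nabla\Phi_m(\mathbf{r})\big]
    + \mu_{am}(\mathbf{r})\,\Phi_m(\mathbf{r}) &= \eta\,\mu_{af}(\mathbf{r})\,\Phi_x(\mathbf{r})
\end{align*}
% the emission equation is driven by the excitation fluence \Phi_x,
% scaled by the fluorophore absorption \mu_{af} and quantum yield \eta.
```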
Abstract:
This paper proposes a sparse modeling approach to solve ordinal regression problems using Gaussian processes (GP). Designing a sparse GP model is important from training time and inference time viewpoints. We first propose a variant of the Gaussian process ordinal regression (GPOR) approach, leave-one-out GPOR (LOO-GPOR). It performs model selection using the leave-one-out cross-validation (LOO-CV) technique. We then provide an approach to design a sparse model for GPOR. The sparse GPOR model reduces computational time and storage requirements. Further, it provides faster inference. We compare the proposed approaches with the state-of-the-art GPOR approach on some benchmark data sets. Experimental results show that the proposed approaches are competitive.
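As a generic illustration of leave-one-out model selection (not the GPOR ordinal likelihood itself, which is not available in standard libraries), the sketch below selects a kernel length scale for a scikit-learn Gaussian process regressor by LOO-CV.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut, cross_val_score

# toy 1-D regression data standing in for ordinal targets
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 40)

# model selection: pick the RBF length scale with the best LOO-CV score
best = None
for length_scale in (0.1, 0.5, 1.0, 2.0):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale),
                                  alpha=0.01, optimizer=None)
    score = cross_val_score(gp, X, y, cv=LeaveOneOut(),
                            scoring="neg_mean_squared_error").mean()
    if best is None or score > best[0]:
        best = (score, length_scale)

print("selected length scale:", best[1])
```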
Abstract:
Evaporative fraction (EF) is a measure of the amount of available energy at the earth surface that is partitioned into latent heat flux. Currently operational thermal sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) provide data only at 1000 m resolution, which constrains the spatial resolution of EF estimates. A simple model, disaggregation of evaporative fraction (DEFrac), based on the observed relationship between EF and the normalized difference vegetation index (NDVI), is proposed to spatially disaggregate EF. The DEFrac model was tested with EF estimated by the triangle method using 113 clear-sky data sets from the MODIS sensors aboard the Terra and Aqua satellites. Validation was performed against data from four micrometeorological tower sites across varied agro-climatic zones with different land cover conditions in India, using the Bowen ratio energy balance method. The root-mean-square error (RMSE) of EF estimated at 1000 m resolution using the triangle method was 0.09 for all four sites combined, and the RMSE of DEFrac-disaggregated EF at 250 m resolution was also 0.09. Two input-disaggregation models were also tried, with thermal data sharpened using the thermal sharpening models DisTrad and TsHARP; the RMSE of the disaggregated EF was 0.14 at 250 m resolution for both. Moreover, a spatial analysis of the disaggregation was performed using Landsat-7 Enhanced Thematic Mapper (ETM+) data over four grids in India for contrasting seasons. The DEFrac model performed better than the input-disaggregation models under cropped conditions, while the two were marginally similar under non-cropped conditions.
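The abstract does not give DEFrac's exact functional form; the sketch below assumes a simple linear EF-NDVI fit at the coarse scale, applied to fine-scale NDVI and rescaled so that each coarse pixel's mean EF is preserved.

```python
import numpy as np

def disaggregate_ef(ef_coarse, ndvi_coarse, ndvi_fine, scale=4):
    """Disaggregate evaporative fraction (EF) using an EF-NDVI relation.

    ef_coarse, ndvi_coarse: (H, W) arrays at coarse resolution (e.g. 1000 m)
    ndvi_fine:              (H*scale, W*scale) NDVI at fine resolution (250 m)
    A linear EF-NDVI fit at the coarse scale is applied to fine NDVI, then
    each coarse cell is rescaled so that its mean EF is preserved.
    """
    a, b = np.polyfit(ndvi_coarse.ravel(), ef_coarse.ravel(), 1)
    ef_fine = a * ndvi_fine + b
    H, W = ef_coarse.shape
    blocks = ef_fine.reshape(H, scale, W, scale)
    block_mean = blocks.mean(axis=(1, 3))
    # enforce consistency with the coarse-resolution EF field
    correction = ef_coarse / np.where(block_mean == 0, 1, block_mean)
    ef_fine = (blocks * correction[:, None, :, None]).reshape(H * scale, W * scale)
    return np.clip(ef_fine, 0.0, 1.0)
```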
Abstract:
In this study, we applied the integration methodology developed in the companion paper by Aires (2014) to real satellite observations over the Mississippi Basin. The methodology provides basin-scale estimates of the four water budget components (precipitation P, evapotranspiration E, water storage change ΔS, and runoff R) in a two-step process: a Simple Weighting (SW) integration and a Postprocessing Filtering (PF) that imposes water budget closure. A comparison with in situ observations of P and E demonstrated that PF improved the estimation of both components. A Closure Correction Model (CCM) was derived from the integrated product (SW+PF); it allows each observational data set to be corrected independently, unlike the SW+PF method, which requires simultaneous estimates of all four components. The CCM standardizes the various data sets for each component and greatly reduces the budget residual (P - E - ΔS - R). As a direct application, the CCM was combined with the water budget equation to reconstruct missing values in any component. Results of a Monte Carlo experiment with synthetic gaps demonstrated the good performance of the method, except for the runoff data, whose variability is of the same order of magnitude as the budget residual. Similarly, we propose a reconstruction of ΔS between 1990 and 2002, when no Gravity Recovery and Climate Experiment (GRACE) data are available. Unlike most studies dealing with water budget closure at the basin scale, only satellite observations and in situ runoff measurements are used. Consequently, the integrated data sets are model independent and can be used for model calibration or validation.
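A minimal sketch of the two steps is given below under simplifying assumptions: inverse-error-variance weighting for SW, and a constrained adjustment that redistributes the budget residual in proportion to assumed error variances for the closure step. This is a standard closure filter, not necessarily the exact PF of Aires (2014).

```python
import numpy as np

def simple_weighting(estimates, error_var):
    """Merge several data sets of one component by inverse-error-variance weights."""
    w = 1.0 / np.asarray(error_var)
    return np.sum(w * np.asarray(estimates)) / np.sum(w)

def closure_filter(P, E, dS, R, err_var):
    """Redistribute the budget residual so that P - E - dS - R = 0.

    Each component is adjusted in proportion to its assumed error variance
    (a standard constrained post-processing step).
    """
    x = np.array([P, E, dS, R], dtype=float)
    g = np.array([1.0, -1.0, -1.0, -1.0])       # closure constraint: g @ x = 0
    residual = g @ x
    K = (err_var * g) / (g @ (err_var * g))     # Kalman-like gain
    return x - K * residual                     # closed-budget estimates

P, E, dS, R = 3.1, 1.9, 0.4, 0.6                # mm/day, toy numbers
closed = closure_filter(P, E, dS, R, err_var=np.array([0.2, 0.3, 0.1, 0.05]))
print(closed, closed[0] - closed[1] - closed[2] - closed[3])  # residual ~ 0
```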
Abstract:
Molten A356 aluminum alloy flowing on an oblique plate, water cooled from underneath, partially solidifies on the plate wall with continuous formation of columnar dendrites. These dendrites are continuously sheared off into equiaxed/fragmented grains and carried away with the melt, producing a semisolid slurry collected at the plate exit. The melt pouring temperature provides the required solidification, whereas the plate inclination provides the shear necessary to produce slurry of the desired solid fraction. A numerical model based on the transport equations of mass, momentum, energy, and species is developed to predict velocity, temperature, macrosegregation, and solid fraction. The model uses the finite volume method (FVM) with a phase-change algorithm, the volume of fluid (VOF) method, and variable viscosity, and also accounts for solid phase movement under gravity. The effects of melt pouring temperature and plate inclination on the hydrodynamic and thermo-solutal behavior are then studied. Slurry solid fractions at the plate exit are 27%, 22%, 16%, and 10% for pouring temperatures of 620 °C, 625 °C, 630 °C, and 635 °C, respectively, and 27%, 25%, 22%, and 18% for plate inclinations of 30°, 45°, 60°, and 75°, respectively. A melt pouring temperature of 625 °C with a plate inclination of 60° generates slurry of appropriate quality and is the optimum. Numerical and experimental results are in good agreement. (C) 2015 Taiwan Institute of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
Abstract:
The current study presents an algorithm to retrieve surface soil moisture (SM) from multi-temporal Synthetic Aperture Radar (SAR) data. The developed algorithm is based on a cumulative distribution function (CDF) transformation of the multi-temporal RADARSAT-2 backscatter coefficient (BC) to obtain relative SM values, which are then converted into absolute SM values using soil information. The algorithm is tested in a semi-arid tropical region in South India using 30 RADARSAT-2 satellite images, SMOS L2 SM products, and 1262 SM field measurements in 50 plots spanning 4 years. Validation with the field data showed the ability of the algorithm to retrieve SM with an RMSE ranging from 0.02 to 0.06 m³/m³ for the majority of plots. Comparison with SMOS SM showed good temporal behaviour, with an RMSE of approximately 0.05 m³/m³ and a correlation coefficient of approximately 0.9. The developed model is compared with, and found to perform better than, the change detection and delta index models. The approach does not require calibration of any parameter to obtain relative SM and hence can easily be extended to any region with a time series of SAR data.
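The exact CDF transformation is not spelled out in the abstract; the sketch below assumes the per-pixel empirical CDF of the backscatter time series is taken as relative SM and then scaled between lower and upper moisture bounds derived from soil information.

```python
import numpy as np
from scipy.stats import rankdata

def sm_from_backscatter(bc_series, theta_min, theta_max):
    """Relative-to-absolute soil moisture from a backscatter time series.

    bc_series: (T,) backscatter coefficients (dB) of one pixel over time.
    The empirical CDF of the series is used as relative soil moisture
    (0 = driest observed state, 1 = wettest), then scaled between a lower
    and an upper bound taken from soil information (both in m3/m3).
    """
    bc = np.asarray(bc_series, dtype=float)
    rel_sm = (rankdata(bc) - 1) / (len(bc) - 1)   # empirical CDF in [0, 1]
    return theta_min + rel_sm * (theta_max - theta_min)

# toy example: 30 acquisitions, bounds assumed from soil texture information
rng = np.random.default_rng(4)
bc = rng.normal(-10.0, 2.0, 30)
print(sm_from_backscatter(bc, theta_min=0.05, theta_max=0.45))
```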