22 results for Data Accuracy


Relevance: 30.00%

Publisher:

Abstract:

The study and forecasting of ocean dynamics is crucial, as the ocean influences regional climate and many marine activities. Forecasting oceanographic states such as sea surface currents, sea surface temperature (SST), and mixed layer depth at different time scales is extremely important for these activities. These forecasts are generated by ocean general circulation models (OGCMs), one of which is the Regional Ocean Modelling System (ROMS). Although ROMS can simulate several features of the ocean, it cannot reproduce the thermocline properly. One solution to this problem is to incorporate data assimilation (DA) into the model. A DA system based on the Ensemble Transform Kalman Filter (ETKF) has been developed for the ROMS model to improve the accuracy of its forecasts. Temperature and salinity profiles from ARGO floats were used as observations. For the Indian Ocean, the temperature and salinity fields assimilated without localization show oscillations compared to the model run without assimilation; the same was found for the u- and v-velocity fields. With localization, we found that the state variables diverge within the localization scale.
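The ETKF analysis step at the heart of such a DA system can be sketched as below. This is a minimal, unlocalized version written for clarity; the shapes, the linear observation operator, and the symmetric square-root transform are standard textbook choices, not details taken from the abstract.

```python
import numpy as np

def etkf_update(ensemble, y, H, R):
    """One ETKF analysis step (minimal sketch, no localization).

    ensemble : (n_state, n_ens) forecast ensemble
    y        : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation-error covariance
    """
    n_ens = ensemble.shape[1]
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = ensemble - x_mean                       # state perturbations
    Y = H @ X                                   # observation-space perturbations
    Rinv = np.linalg.inv(R)
    # Analysis covariance in ensemble space: Pa = [(m-1)I + Y^T R^-1 Y]^-1
    Pa = np.linalg.inv((n_ens - 1) * np.eye(n_ens) + Y.T @ Rinv @ Y)
    # Mean update via the ensemble-space gain
    w = Pa @ Y.T @ Rinv @ (y - (H @ x_mean).ravel())
    x_mean_a = x_mean.ravel() + X @ w
    # Perturbation update: Xa = X T with T = [(m-1) Pa]^(1/2) (symmetric root)
    evals, evecs = np.linalg.eigh((n_ens - 1) * Pa)
    T = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    Xa = X @ T
    return x_mean_a[:, None] + Xa
```

The divergence noted in the abstract arises when a localization operator is applied to the gain; the sketch above omits that step entirely.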

Relevance: 30.00%

Publisher:

Abstract:

Effective air flow distribution through perforated tiles is required to efficiently cool servers in a raised-floor data center. We present detailed computational fluid dynamics (CFD) modeling of air flow through a perforated tile and its entrance into the adjacent server rack. The model includes the realistic geometrical details of both the perforated tile and the rack. Typically, models of air flow through perforated tiles specify a step pressure loss across the tile surface based on the tile porosity (the porous jump model). An improvement on this adds a momentum source above the tile to simulate the acceleration of the air flow through the pores (the body force model). Neither of these models includes geometrical details of the tile, such as pore locations and shapes. More detail increases the grid size as well as the computational time; however, grid refinement can be controlled to balance accuracy against computational time. We compared the CFD results obtained with full geometrical resolution against the porous jump and body force model solutions, as well as against flow fields measured in particle image velocimetry (PIV) experiments. We observe that including the tile's geometrical details gives better results than eliminating them and specifying physical models across and above the tile surface. A modification to the body force model is also suggested and yields improved results.
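The porous jump idea reduces the tile to a single loss coefficient applied to the approach velocity. A sketch of that step loss is below; the orifice-plate form of the coefficient and the default discharge coefficient are common empirical assumptions, not values from the paper.

```python
def tile_pressure_drop(v, porosity, rho=1.2, c_d=0.6):
    """Step pressure loss across a perforated tile (porous-jump sketch).

    Uses the classic orifice-plate loss coefficient
    K = (1/(c_d * porosity) - 1)**2 applied to the approach velocity v (m/s),
    with air density rho (kg/m^3). Both the formula and the default
    discharge coefficient c_d are illustrative assumptions.
    Returns the pressure drop in Pa.
    """
    K = (1.0 / (c_d * porosity) - 1.0) ** 2
    return 0.5 * rho * K * v ** 2
```

Note how strongly the loss grows as porosity falls: halving the open area roughly quadruples the coefficient, which is why low-porosity tiles dominate the plenum pressure budget.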

Relevance: 30.00%

Publisher:

Abstract:

We develop iterative diffraction tomography algorithms, similar to the distorted Born algorithms, for inverting scattered intensity data. Within the Born approximation, the unknown scattered field is expressed as a multiplicative perturbation to the incident field. With this, the forward equation becomes stable, which helps us compute nearly oscillation-free solutions that have an immediate bearing on the accuracy of the Jacobian computed for use in a deterministic Gauss-Newton (GN) reconstruction. However, since the data are inherently noisy and the sensitivity of the measurement to the refractive index away from the detectors is poor, we report a derivative-free evolutionary stochastic scheme, providing strictly additive updates to bridge the measurement-prediction misfit, to arrive at the refractive index distribution from intensity transport data. The superiority of the stochastic algorithm over the GN scheme under similar settings is demonstrated by reconstructing the refractive index profile from simulated and experimentally acquired intensity data. (C) 2014 Optical Society of America
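The deterministic GN baseline referred to above is the standard damped Gauss-Newton iteration on a residual vector. The sketch below shows the generic update; the toy residual in the usage test stands in for the tomographic forward model, which is far more involved.

```python
import numpy as np

def gauss_newton(x0, residual, jacobian, n_iter=10, damping=1e-9):
    """Damped Gauss-Newton iteration (generic sketch).

    residual(x) returns r = model(x) - data; jacobian(x) returns dr/dx.
    Each step solves (J^T J + damping*I) dx = -J^T r, the normal-equations
    form of the linearized least-squares subproblem.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J + damping * np.eye(x.size), -J.T @ r)
        x = x + dx
    return x
```

The abstract's point is precisely that when r is noisy and J is poorly conditioned away from the detectors, this derivative-based update degrades, motivating the derivative-free stochastic alternative.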

Relevance: 30.00%

Publisher:

Abstract:

Large-scale estimates of the area of terrestrial surface waters have greatly improved over time, in particular through the development of multi-satellite methodologies, but the generally coarse spatial resolution (tens of km) of global observations is still inadequate for many ecological applications. The goal of this study is to introduce a new, globally applicable downscaling method and to demonstrate its applicability for deriving fine-resolution results from coarse global inundation estimates. The downscaling procedure predicts the location of surface water cover with an inundation probability map generated by bagged decision trees, using globally available topographic and hydrographic information from the SRTM-derived HydroSHEDS database and trained on the wetland extent of the GLC2000 global land cover map. We applied the downscaling technique to the Global Inundation Extent from Multi-Satellites (GIEMS) dataset to produce a new high-resolution inundation map at a pixel size of 15 arc-seconds, termed GIEMS-D15. GIEMS-D15 represents three states of land surface inundation extent: mean annual minimum (total area, 6.5 x 10^6 km^2), mean annual maximum (12.1 x 10^6 km^2), and long-term maximum (17.3 x 10^6 km^2); the latter depicts the largest surface water area of any global map to date. While the accuracy of GIEMS-D15 reflects distribution errors introduced by the downscaling process as well as errors from the original satellite estimates, overall accuracy is good yet spatially variable. A comparison against regional wetland cover maps generated from independent observations shows that the results adequately represent large floodplains and wetlands. GIEMS-D15 offers a higher-resolution delineation of inundated areas than previously available for the assessment of global freshwater resources and the study of large floodplain and wetland ecosystems. The technique of applying inundation probabilities also allows for coupling with coarse-scale hydro-climatological model simulations. (C) 2014 Elsevier Inc. All rights reserved.
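The core of such a probability-based downscaling can be sketched for a single coarse cell: the fine pixels inside it are ranked by inundation probability, and the highest-ranked pixels are flagged as water until the coarse cell's inundated fraction is satisfied. This is a simplified illustration of the ranking idea, not the paper's exact procedure.

```python
def downscale_inundation(coarse_fraction, prob_map):
    """Downscale one coarse cell's inundated fraction onto its fine pixels.

    coarse_fraction : fraction of the coarse cell that is inundated (0..1)
    prob_map        : inundation probabilities of the fine pixels in the cell
                      (e.g. from bagged decision trees on topographic data)
    Returns a boolean list marking which fine pixels are flagged as water.
    """
    n_fine = len(prob_map)
    n_wet = round(coarse_fraction * n_fine)
    # Rank fine pixels by probability, highest first
    order = sorted(range(n_fine), key=lambda i: prob_map[i], reverse=True)
    wet = [False] * n_fine
    for i in order[:n_wet]:
        wet[i] = True
    return wet
```

Because only the ranking matters within a cell, the probability map does not need to be calibrated in an absolute sense, which is one reason such maps transfer well globally.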

Relevance: 30.00%

Publisher:

Abstract:

The disclosure of information and its misuse in Privacy Preserving Data Mining (PPDM) systems is a concern to the parties involved. In PPDM systems, data is shared among multiple parties collaborating to achieve cumulative mining accuracy. Vertically partitioned data held by the individual parties cannot provide mining results as accurate as collaborative mining. To overcome the privacy issue in data disclosure, this paper describes a Key Distribution-Less Privacy Preserving Data Mining (KDLPPDM) system in which the parties publish their locally generated association rules. The association rules are securely combined into a joint rule set using the commutative RSA algorithm, and the combined rule sets are used to classify or mine the data. The results discussed in this paper compare the accuracy of the rules generated by the C4.5-based and C5.0-based KDLPPDM systems using receiver operating characteristic (ROC) curves.
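The commutative property that makes key-distribution-less combining possible is that textbook RSA exponentiations under a shared modulus compose in either order: (m^ea)^eb = (m^eb)^ea (mod n). A toy demonstration is below; the tiny parameters (n = 61 x 53) are for illustration only and are nowhere near a secure key size.

```python
def enc(m, e, n):
    """Textbook RSA exponentiation: m^e mod n.

    When two parties share the modulus n but hold private exponents
    ea and eb, enc(enc(m, ea, n), eb, n) == enc(enc(m, eb, n), ea, n),
    so rules can be blinded by both parties in any order without
    either party distributing its key.
    """
    return pow(m, e, n)
```

In the KDLPPDM setting each party would encrypt (hashes of) its local rules with its own exponent; matching doubly encrypted values then reveals common rules without exposing the plaintext rules or the keys.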

Relevance: 30.00%

Publisher:

Abstract:

Action recognition plays an important role in various applications, including smart homes and personal assistive robotics. In this paper, we propose an algorithm for recognizing human actions using motion capture data. Motion capture data provides accurate three-dimensional positions of the joints that constitute the human skeleton. We model the movement of the skeletal joints temporally in order to classify the action. The skeleton in each frame of an action sequence is represented as a 129-dimensional vector, each component of which is a 3D angle made by a joint with a fixed point on the skeleton. The video is then represented as a histogram over a codebook obtained from all action sequences. In addition, the temporal variance of the skeletal joints is used as a supplementary feature. The actions are classified using the Meta-Cognitive Radial Basis Function Network (McRBFN) and its Projection Based Learning (PBL) algorithm. We achieve over 97% recognition accuracy on the widely used Berkeley Multimodal Human Action Database (MHAD).
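The histogram-over-codebook step is a standard bag-of-words representation: each frame's descriptor is assigned to its nearest codeword, and the clip becomes a normalized count vector. A sketch of just that step is below; it works on vectors of any dimension, whereas the paper uses 129-D joint-angle descriptors and a codebook built from all action sequences.

```python
def pose_histogram(frames, codebook):
    """Represent an action clip as a normalized histogram over a pose codebook.

    frames   : list of per-frame descriptor vectors
    codebook : list of codeword vectors of the same dimension
    Each frame votes for its nearest codeword (squared Euclidean distance);
    the histogram is normalized to sum to 1 so clip length cancels out.
    """
    def nearest(v):
        return min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(v, codebook[k])))
    hist = [0.0] * len(codebook)
    for f in frames:
        hist[nearest(f)] += 1.0
    total = sum(hist)
    return [h / total for h in hist]
```

This fixed-length vector, concatenated with the temporal-variance feature, is what a classifier such as McRBFN would consume.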

Relevance: 30.00%

Publisher:

Abstract:

Since streaming data arrives continuously as an ordered sequence, massive amounts of data are created. A major challenge in handling data streams is the limitation of time and space. Prototype selection on streaming data requires the prototypes to be updated incrementally as new data arrives. We propose an incremental algorithm for prototype selection, which can also be used to handle very large datasets. Results are presented on a number of large datasets, and our method is compared to an existing algorithm for streaming data. Our algorithm saves time, and the selected prototypes give good classification accuracy.
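One classic way to meet the single-pass, bounded-memory constraint described above is a leader-style rule: a new point becomes a prototype only if it is farther than a threshold from every existing prototype. The abstract does not specify its selection rule, so the leader heuristic below is an illustrative stand-in, not the paper's algorithm.

```python
def update_prototypes(prototypes, point, threshold):
    """Leader-style incremental prototype selection (sketch).

    Each streamed point is touched exactly once: it is appended to the
    prototype set only if it lies farther than `threshold` (Euclidean
    distance) from every existing prototype, so memory stays bounded by
    the data's coverage at that radius rather than by stream length.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    if all(dist(point, p) > threshold for p in prototypes):
        prototypes.append(point)
    return prototypes
```

The threshold trades compression against fidelity: a larger radius keeps fewer prototypes but coarsens the decision boundaries the selected set can support.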