18 results for validation study

at Indian Institute of Science - Bangalore - India


Relevance:

60.00%

Publisher:

Abstract:

Facial expressions are the most expressive way of displaying emotions. Many algorithms have been proposed that employ a particular set of people (usually a single database) to both train and test their model. This paper focuses on the challenging task of database-independent emotion recognition, which is a generalized case of subject-independent emotion recognition. The emotion recognition system employed in this work is a Meta-Cognitive Neuro-Fuzzy Inference System (McFIS). McFIS has two components: a neuro-fuzzy inference system, which is the cognitive component, and a self-regulatory learning mechanism, which is the meta-cognitive component. The meta-cognitive component monitors the knowledge in the neuro-fuzzy inference system and decides what to learn, when to learn, and how to learn each training sample efficiently. For each sample, McFIS decides whether to delete the sample without learning it, use it to add, prune, or update network parameters, or reserve it for future use. This helps the network avoid over-training and thereby improves its generalization performance on untrained databases. In this study, we extract pixel-based emotion features from the well-known JAFFE (Japanese Female Facial Expression) and TFEID (Taiwanese Facial Expression Image) databases. Two sets of experiments are conducted. First, we study the individual performance of McFIS on each database using 5-fold cross-validation. Next, in order to study generalization performance, McFIS trained on the JAFFE database is tested on TFEID and vice versa. The performance comparison against an SVM classifier in both experiments gives promising results.
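
The two evaluation protocols described above (within-database 5-fold cross-validation and cross-database train/test) can be outlined as follows. This is an illustrative sketch only: a generic scikit-learn SVM stands in for McFIS, and the JAFFE/TFEID feature and label arrays are hypothetical placeholders.

```python
# Illustrative sketch of the two evaluation protocols described in the abstract.
# A generic SVM stands in for McFIS; jaffe_X/jaffe_y and tfeid_X/tfeid_y are
# hypothetical pixel-feature and label arrays extracted from the two databases.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def within_database_cv(X, y, n_folds=5):
    """Protocol 1: 5-fold cross-validation within a single database."""
    clf = SVC(kernel="rbf")
    scores = cross_val_score(clf, X, y, cv=n_folds)
    return scores.mean()

def cross_database(train_X, train_y, test_X, test_y):
    """Protocol 2: train on one database, test on the other."""
    clf = SVC(kernel="rbf")
    clf.fit(train_X, train_y)
    return clf.score(test_X, test_y)

# Example usage with hypothetical feature arrays:
# acc_jaffe = within_database_cv(jaffe_X, jaffe_y)
# acc_cross = cross_database(jaffe_X, jaffe_y, tfeid_X, tfeid_y)  # JAFFE -> TFEID
```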

Relevance:

40.00%

Publisher:

Abstract:

Molten A356 aluminum alloy flowing on an oblique plate is water-cooled from underneath. The melt partially solidifies on the plate wall with continuous formation of columnar dendrites. These dendrites are continuously sheared off into equiaxed/fragmented grains and carried away with the melt, producing a semisolid slurry that is collected at the plate exit. The melt pouring temperature provides the required solidification, whereas the plate inclination provides the shear necessary to produce a slurry of the desired solid fraction. A numerical model based on the transport equations of mass, momentum, energy, and species is developed for predicting velocity, temperature, macrosegregation, and solid fraction. The model uses the finite volume method (FVM) with a phase-change algorithm, the volume-of-fluid (VOF) approach, and variable viscosity, and also introduces solid-phase movement under gravity. The effects of melt pouring temperature and plate inclination on the hydrodynamic and thermo-solutal behavior are studied subsequently. Slurry solid fractions at the plate exit are 27%, 22%, 16%, and 10% for pouring temperatures of 620 °C, 625 °C, 630 °C, and 635 °C, respectively, and 27%, 25%, 22%, and 18% for plate inclinations of 30°, 45°, 60°, and 75°, respectively. A melt pouring temperature of 625 °C with a plate inclination of 60° generates slurry of appropriate quality and is the optimum. The numerical and experimental results are in good agreement with each other. (C) 2015 Taiwan Institute of Chemical Engineers. Published by Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

In bovines, characterization of the biochemical and molecular determinants of the dominant follicle before and during different time intervals after the gonadotrophin surge requires precise identification of the dominant follicle from a follicular wave. The objectives of the present study were to standardize an experimental model in buffalo cows for accurately identifying the dominant follicle of the first wave of follicular growth and to characterize changes in follicular fluid hormone concentrations as well as expression patterns of various genes associated with the process of ovulation. From the day of estrus (day 0), animals were subjected to blood sampling and ultrasonography for monitoring circulating progesterone levels and follicular growth. On day 7 of the cycle, animals were administered a PGF2α analogue (Tiaprost Trometamol, 750 μg i.m.) followed by an injection of hCG (2000 IU i.m.) 36 h later. Circulating progesterone levels progressively increased from day 1 of the cycle to 2.26 ± 0.17 ng/ml on day 7, but declined significantly after the PGF2α injection. A progressive increase in the size of the dominant follicle was observed by ultrasonography. The follicular fluid estradiol and progesterone concentrations in the dominant follicle were 600 ± 16.7 and 38 ± 7.6 ng/ml, respectively, before hCG injection; 24 h post-hCG injection, the estradiol concentration decreased to 125.8 ± 25.26 ng/ml while the progesterone concentration increased to 195 ± 24.6 ng/ml. Inh-α and Cyp19A1 expression in granulosa cells was maximal in the dominant follicle and declined in response to hCG treatment. Expression of progesterone receptor, oxytocin, and cyclooxygenase-2 in granulosa cells, regarded as markers of ovulation, was maximal at 24 h post-hCG. The expression of genes belonging to the protease superfamily was also examined; Cathepsin L expression decreased, while ADAMTS 3 and 5 expression increased 24 h post-hCG treatment. The results of the current study indicate that sequential treatment with PGF2α and hCG during the early estrous cycle in the buffalo cow leads to follicular growth that culminates in ovulation. The model system reported in the present study would be valuable for examining temporo-spatial changes in the periovulatory follicle immediately before and after the onset of the gonadotrophin surge.

Relevance:

30.00%

Publisher:

Abstract:

Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems. Designing a GP classifier and making predictions with it is, however, computationally demanding, especially when the training set is large. Sparse GP classifiers are known to overcome this limitation. In this letter, we propose and study a validation-based method for sparse GP classifier design. The proposed method uses a negative log predictive (NLP) loss measure, which is easy to compute for GP models. We use this measure for both basis vector selection and hyperparameter adaptation. Experimental results on several real-world benchmark data sets show better or comparable generalization performance over existing methods.
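
As an illustration, the negative log predictive (NLP) loss used for validation is simply the average of -log p(y_i | x_i) over held-out points. A minimal sketch under that reading, using a dense scikit-learn GP classifier as a stand-in for the sparse design described above, with hypothetical arrays X_train, y_train, X_val, y_val:

```python
# Minimal sketch of the negative log predictive (NLP) loss used as a
# validation measure; a dense scikit-learn GP classifier stands in for the
# sparse GP design described in the abstract.  X_train, y_train, X_val, y_val
# are hypothetical arrays with integer labels in {0, 1}.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def nlp_loss(clf, X_val, y_val):
    """Average negative log predictive probability on held-out data."""
    proba = clf.predict_proba(X_val)                 # shape (n_samples, 2)
    p_true = proba[np.arange(len(y_val)), y_val]     # probability of the true class
    return -np.mean(np.log(np.clip(p_true, 1e-12, 1.0)))

# Example usage (hypothetical data):
# clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
# clf.fit(X_train, y_train)
# print("NLP loss on validation set:", nlp_loss(clf, X_val, y_val))
```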

Relevance:

30.00%

Publisher:

Abstract:

The short duration of the Doppler signal and the noise content in it necessitate a validation scheme to be incorporated in the electronic processor used for frequency measurement. There are several different validation schemes that can be employed in period timing devices. A detailed study of the influence of these validation schemes on the measured frequency is reported here. The studies were carried out using a combination of a fast A/D converter and a computer. Doppler bursts obtained from an air flow were digitized and stored on magnetic discs. Suitable computer programs were then used to simulate the performance of period timing devices with different validation schemes, and the frequency of the stored bursts was evaluated. It is found that the best results are obtained when the validation scheme enables the frequency measurement to be made over a large number of cycles within the burst.
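
A period timing device essentially measures the time spanned by a chosen number of signal cycles and converts that span into a frequency. The sketch below illustrates only that principle on a digitized burst, using rising zero crossings; it is not the authors' processor or any of the validation schemes they studied, and the signal and sampling rate are hypothetical.

```python
# Illustrative sketch of period-timing frequency estimation on a digitized
# Doppler burst: measure the time spanned by n_cycles periods using rising
# zero crossings.  `burst` is a hypothetical 1-D sample array, `fs` the
# sampling rate in Hz; this is not the processors studied in the paper.
import numpy as np

def burst_frequency(burst, fs, n_cycles=8):
    """Estimate the burst frequency from the time spanned by n_cycles periods."""
    x = burst - np.mean(burst)                          # remove the DC pedestal
    rising = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]   # rising zero crossings
    if len(rising) < n_cycles + 1:
        return None                                     # burst too short to validate
    span = (rising[n_cycles] - rising[0]) / fs          # duration of n_cycles periods
    return n_cycles / span

# Example: a clean 5 kHz test burst sampled at 1 MHz
# t = np.arange(2000) / 1e6
# print(burst_frequency(np.sin(2 * np.pi * 5e3 * t), fs=1e6))
```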

Relevance:

30.00%

Publisher:

Abstract:

An estimate of the groundwater budget at the catchment scale is extremely important for the sustainable management of available water resources. Water resources are generally subjected to over-exploitation for agricultural and domestic purposes in agrarian economies like India. The double water-table fluctuation method is a reliable method for calculating the water budget in semi-arid crystalline rock areas. Extensive measurements of water levels from a dense network before and after the monsoon rainfall were made in a 53 km² watershed in southern India, and the various components of the water balance were then calculated. The water level data then underwent geostatistical analysis to determine the priority and/or redundancy of each measurement point using a cross-validation method. An optimal network evolved from these analyses. The network was then used to re-calculate the water-balance components. It was established that such an optimized network retains far fewer measurement points without considerably changing the conclusions regarding the groundwater budget. This exercise is helpful in reducing the time and expenditure involved in exhaustive piezometric surveys and also in determining the water budget for large watersheds (greater than 50 km²).
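
The cross-validation used to rank measurement points typically follows a leave-one-out scheme: each piezometer is withheld in turn, its water level is re-estimated from the remaining points, and points whose removal barely degrades the interpolation are candidates for pruning. A minimal sketch under that assumption, with inverse-distance weighting as a simple stand-in for the geostatistical (kriging) interpolator and hypothetical coordinate and level arrays:

```python
# Minimal leave-one-out cross-validation sketch for ranking water-level
# measurement points.  Inverse-distance weighting stands in for the kriging
# interpolator; `xy` (n, 2) and `levels` (n,) are hypothetical piezometer
# coordinates and observed water levels.
import numpy as np

def idw_estimate(xy_known, z_known, xy_target, power=2.0):
    """Inverse-distance-weighted estimate of the level at one target location."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * z_known) / np.sum(w)

def loo_errors(xy, levels):
    """Absolute re-estimation error when each point is left out in turn."""
    errors = np.empty(len(levels))
    for i in range(len(levels)):
        mask = np.arange(len(levels)) != i
        errors[i] = abs(idw_estimate(xy[mask], levels[mask], xy[i]) - levels[i])
    return errors

# Points with the smallest leave-one-out error contribute the least new
# information and are candidates for removal from the network:
# ranking = np.argsort(loo_errors(xy, levels))
```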

Relevance:

30.00%

Publisher:

Abstract:

We present a simplified theory of the carrier backscattering coefficient in a twofold degenerate asymmetric bilayer graphene nanoribbon (BGN) under a low static electric field. We show that for a highly asymmetric BGN (Delta = gamma), the density of states in the lower subband increases more than that of the upper subband, where Delta and gamma are the gap and the interlayer coupling constant, respectively. We also demonstrate that, in the acoustic phonon scattering regime, the formation of two distinct sets of energy subbands gives rise to a transmission coefficient that is quantized as a function of ribbon width and provides an extremely low carrier reflection coefficient, yielding good Landauer conductance even at room temperature. The well-known result for the ballistic condition is recovered as a special case of the present analysis under certain limiting conditions, which forms an indirect validation of our theoretical formalism.
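
For orientation, the Landauer picture invoked above relates the ribbon conductance to the transmission of its occupied subbands; the standard two-terminal textbook form (assumed here, not quoted from this abstract) is:

```latex
% Standard two-terminal Landauer conductance: each occupied subband n
% contributes its transmission probability T_n; spin degeneracy gives 2e^2/h.
G = \frac{2e^{2}}{h}\sum_{n} T_{n},
\qquad
R_{n} = 1 - T_{n} \quad \text{(reflection/backscattering coefficient of subband } n\text{)}
```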

Relevance:

30.00%

Publisher:

Abstract:

A one-dimensional, biphasic, multicomponent steady-state model based on phenomenological transport equations for the catalyst layer, diffusion layer, and polymeric electrolyte membrane has been developed for a liquid-feed solid polymer electrolyte direct methanol fuel cell (SPE-DMFC). The model incorporates three important features: (i) analytical treatment of nonlinear terms, to obtain a faster numerical solution and to make the iterative scheme easier to converge; (ii) an appropriate description of two-phase transport phenomena in the diffusive region of the cell, to account for flooding and water condensation/evaporation effects; and (iii) treatment of polarization effects due to methanol crossover. An improved numerical solution has been achieved by coupling analytical integration of the kinetics and transport equations in the reaction layer, which explicitly includes the effect of concentration and pressure gradients on cell polarization within the bulk catalyst layer. In particular, the integrated kinetic treatment explicitly accounts for the nonhomogeneous porous structure of the catalyst layer and the diffusion of reactants within and between the pores in the cathode. At the anode, the analytical integration of the electrode kinetics has been obtained under the assumption of a macrohomogeneous porous electrode structure, because methanol transport in a liquid-feed SPE-DMFC is essentially a single-phase process owing to the high miscibility of methanol with water and its higher concentration relative to the gaseous reactants. A simple empirical model accounts for the effect of capillary forces on liquid-phase saturation in the diffusion layer. Consequently, the diffusive and convective flow equations, comprising the Nernst-Planck relation for solutes, Darcy's law for liquid water, and the Stefan-Maxwell equation for gaseous species, have been modified to include the capillary flow contribution to transport. To understand fully the role of the model parameters in simulating the performance of the DMFC, a parametric study has been carried out. An experimental validation of the model has also been carried out. (C) 2003 The Electrochemical Society.
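
The three flux relations named above are standard; their textbook one-dimensional forms (shown here for orientation only, not as the exact closures or notation used in the paper) read:

```latex
% Textbook one-dimensional forms of the flux laws named in the abstract.
% Nernst-Planck flux of a dissolved species i (diffusion + migration + convection):
N_i = -D_i \frac{dc_i}{dx} - \frac{z_i F}{RT} D_i c_i \frac{d\phi}{dx} + c_i v
% Darcy's law for the liquid-phase superficial velocity (k_{rl}: relative permeability):
v_l = -\frac{k\, k_{rl}}{\mu_l}\frac{dp_l}{dx}
% Stefan-Maxwell relation for the gaseous species mole fractions x_i:
\frac{dx_i}{dx} = \sum_{j \ne i} \frac{x_i N_j - x_j N_i}{c\, D_{ij}}
```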

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the reduced level of rock in Bangalore, India is obtained from data of 652 boreholes covering an area of 220 sq. km. To predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of rock depth, ordinary kriging and Support Vector Machine (SVM) models have been developed. In ordinary kriging, the semivariogram of the reduced level of rock estimated from the 652 points is used to predict the reduced level of rock at any point in the subsurface of Bangalore where field measurements are not available. A cross-validation (Q1 and Q2) analysis is also carried out for the developed ordinary kriging model. The SVM, a learning machine based on statistical learning theory, uses a regression technique with an ε-insensitive loss function to predict the reduced level of rock from the large data set. A comparison between the ordinary kriging and SVM models demonstrates that the SVM is superior to ordinary kriging in predicting rock depth.
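
A minimal sketch of the SVM-regression side of such a comparison, assuming scikit-learn's SVR with its ε-insensitive loss and hypothetical arrays of borehole coordinates and reduced rock levels (this is an illustration, not the exact model configuration used in the paper):

```python
# Minimal sketch of epsilon-insensitive support vector regression for rock
# level prediction; `coords` (n, 2) and `rock_level` (n,) are hypothetical
# borehole locations (easting, northing) and reduced levels of rock.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

model = make_pipeline(
    StandardScaler(),
    SVR(kernel="rbf", C=10.0, epsilon=0.1),  # epsilon sets the insensitive-tube width
)

# 5-fold cross-validated RMSE as a rough analogue of the kriging Q1/Q2 checks:
# rmse = np.sqrt(-cross_val_score(model, coords, rock_level,
#                                 scoring="neg_mean_squared_error", cv=5)).mean()
# model.fit(coords, rock_level)
# predicted_level = model.predict(np.array([[easting, northing]]))
```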

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with an experimental study of pressure-swirl hydraulic injector nozzles using non-intrusive optical techniques. Experiments were conducted to study the atomization characteristics of two nozzles with different orifice diameters, 0.3 mm and 0.5 mm, at injection pressures of 0.3-3.5 MPa, corresponding to Reynolds numbers (Re_p) of 7,000-45,000 depending on the nozzle used. Three laser diagnostic techniques were employed: shadowgraphy, PIV (Particle Image Velocimetry), and PDPA (Phase Doppler Particle Anemometry). Measurements made in the spray in both the axial and radial directions indicate that the velocity and average droplet diameter profiles and the spray dynamics are highly dependent on the nozzle characteristics and injection pressure. Limitations of these techniques in the different flow regimes, related to primary and secondary breakup as well as coalescence, are discussed. The three techniques provide similar results throughout the regimes in which they overlap: shadowgraphy and PDPA measurements were possible in the secondary atomization and coalescence regimes, while PIV measurements could be made only at the end of secondary atomization and during coalescence.

Relevance:

30.00%

Publisher:

Abstract:

Evaporative fraction (EF) is a measure of the amount of available energy at the earth's surface that is partitioned into latent heat flux. Currently operational thermal sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) provide data only at 1000 m resolution, which constrains the spatial resolution of EF estimates. A simple model (disaggregation of evaporative fraction, DEFrac), based on the observed relationship between EF and the normalized difference vegetation index, is proposed to spatially disaggregate EF. The DEFrac model was tested with EF estimated by the triangle method using 113 clear-sky data sets from the MODIS sensors aboard the Terra and Aqua satellites. Validation was done using data from four micrometeorological tower sites across varied agro-climatic zones with different land cover conditions in India, with fluxes obtained by the Bowen ratio energy balance method. The root-mean-square error (RMSE) of EF estimated at 1000 m resolution using the triangle method was 0.09 for all four sites together. The RMSE of DEFrac-disaggregated EF at 250 m resolution was also 0.09. Two input-disaggregation models were also tried, with the thermal data sharpened using the DisTrad and TsHARP thermal sharpening models; the RMSE of the disaggregated EF at 250 m resolution was 0.14 for both input-disaggregation models. Moreover, a spatial analysis of the disaggregation was performed using Landsat-7 ETM+ (Enhanced Thematic Mapper Plus) data over four grids in India for contrasting seasons. It was observed that the DEFrac model performed better than the input-disaggregation models under cropped conditions, while the models were marginally similar under non-cropped conditions.
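
For reference, the evaporative fraction referred to above is conventionally defined from the surface energy balance as the share of available energy going into latent heat (a standard definition, stated here for orientation rather than quoted from the abstract):

```latex
% Conventional definition of evaporative fraction from the surface energy
% balance: LE is latent heat flux, H sensible heat flux, Rn net radiation,
% G ground heat flux, so Rn - G is the available energy.
EF = \frac{LE}{LE + H} = \frac{LE}{R_{n} - G}
```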

Relevance:

30.00%

Publisher:

Abstract:

In this study, we applied the integration methodology developed in the companion paper by Aires (2014) to real satellite observations over the Mississippi Basin. The methodology provides basin-scale estimates of the four water budget components (precipitation P, evapotranspiration E, water storage change ΔS, and runoff R) in a two-step process: Simple Weighting (SW) integration followed by Postprocessing Filtering (PF), which imposes water budget closure. A comparison with in situ observations of P and E demonstrated that PF improved the estimation of both components. A Closure Correction Model (CCM) has been derived from the integrated product (SW+PF) that allows each observation data set to be corrected independently, unlike the SW+PF method, which requires simultaneous estimates of all four components. The CCM standardizes the various data sets for each component and greatly decreases the budget residual (P - E - ΔS - R). As a direct application, the CCM was combined with the water budget equation to reconstruct missing values in any component. Results of a Monte Carlo experiment with synthetic gaps demonstrated the good performance of the method, except for the runoff data, whose variability is of the same order of magnitude as the budget residual. Similarly, we propose a reconstruction of ΔS between 1990 and 2002, when no Gravity Recovery and Climate Experiment data are available. Unlike most studies dealing with water budget closure at the basin scale, only satellite observations and in situ runoff measurements are used. Consequently, the integrated data sets are model independent and can be used for model calibration or validation.
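
The closure constraint at the heart of the method is simply the basin water balance P - E - ΔS - R ≈ 0. The sketch below illustrates how the residual is computed and how a missing component could be reconstructed from the budget equation, using hypothetical monthly series; it illustrates the balance only, not the SW, PF, or CCM algorithms themselves.

```python
# Illustration of the basin water balance used as the closure constraint:
# P - E - dS - R should be ~0 when all components are consistent.  p, e, ds, r
# are hypothetical monthly series (e.g. mm/month) over the basin.
import numpy as np

def budget_residual(p, e, ds, r):
    """Water budget residual P - E - dS - R for each time step."""
    return p - e - ds - r

def reconstruct_ds(p, e, r):
    """Reconstruct storage change by assuming the budget closes exactly."""
    return p - e - r

# Example with hypothetical values (mm/month):
# p, e, r = np.array([90.0]), np.array([55.0]), np.array([20.0])
# print(reconstruct_ds(p, e, r))   # -> [15.]  implied storage change
```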

Relevance:

30.00%

Publisher:

Abstract:

Objective: The aim of this study is to validate the applicability of a PolyVinyliDene Fluoride (PVDF) nasal sensor for assessing nasal airflow in healthy subjects and patients with nasal obstruction, and to correlate the results with the score of a Visual Analogue Scale (VAS). Methods: PVDF nasal sensor and VAS measurements were carried out in 50 subjects (25 healthy subjects and 25 patients). The VAS score of nasal obstruction and the peak-to-peak amplitude (Vp-p) of the nasal cycle measured by the PVDF nasal sensors were analyzed for the right nostril (RN) and left nostril (LN) in both groups. Spearman's rho correlation was calculated. The relationship between the PVDF nasal sensor measurements and the severity of nasal obstruction (VAS score) was assessed by ANOVA. Results: In the healthy group, the nasal airflow measured by the PVDF nasal sensor was 51.14 ± 5.87% for the RN and 48.85 ± 5.87% for the LN. In the patient group, the PVDF nasal sensor indicated lower nasal airflow in the blocked nostrils (RN: 23.33 ± 10.54% and LN: 32.24 ± 11.54%). Moderate correlation was observed in the healthy group (r = 0.710, p < 0.001 for RN and r = 0.651, p < 0.001 for LN), and moderate to strong correlation in the patient group (r = 0.751, p < 0.01 for RN and r = 0.885, p < 0.0001 for LN). Conclusion: The PVDF nasal sensor method is a newly developed technique for measuring nasal airflow. Moderate to strong correlation was observed between the PVDF nasal sensor data and the VAS scores for nasal obstruction. In the present study, the PVDF nasal sensor technique successfully differentiated between healthy subjects and patients with nasal obstruction. Additionally, it can assess the severity of nasal obstruction in comparison with the VAS. Thus, we propose that the PVDF nasal sensor technique could be used as a new diagnostic method to evaluate nasal obstruction in routine clinical practice. (C) 2015 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Precise information on streamflows is of major importance for planning and monitoring water resources schemes related to hydro power, water supply, irrigation, and flood control, and for maintaining ecosystems. Engineers encounter challenges when streamflow data are either unavailable or inadequate at target locations. To address these challenges, there have been efforts to develop methodologies that facilitate prediction of streamflow at ungauged sites. Conventionally, time-intensive and data-exhaustive rainfall-runoff models are used to arrive at streamflow at ungauged sites. Recent studies present improved methods based on regionalization using Flow Duration Curves (FDCs). An FDC is a graphical representation of streamflow variability: a plot of streamflow values against their corresponding exceedance probabilities, which are determined using a plotting position formula. It provides information on the percentage of time any specified magnitude of streamflow is equaled or exceeded. The present study assesses the effectiveness of two methods to predict streamflow at ungauged sites by application to catchments in the Mahanadi river basin, India. The methods considered are (i) the regional flow duration curve method and (ii) the area ratio method. The first method involves (a) developing regression relationships between percentile flows and attributes of catchments in the study area, (b) using these relationships to construct a regional FDC for the ungauged site, and (c) using a spatial interpolation technique to decode the information in the FDC into a streamflow time series for the ungauged site. The area ratio method is conventionally used to transfer streamflow information from gauged sites to ungauged sites. Attributes considered for the analysis include variables representing the hydrology, climatology, topography, land-use/land-cover and soil properties of catchments in the study area. The effectiveness of the presented methods is assessed using jackknife cross-validation. Conclusions based on the study are presented and discussed. (C) 2015 The Authors. Published by Elsevier B.V.
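
The two building blocks mentioned above are simple to state: an FDC assigns each ranked flow an exceedance probability through a plotting position formula (the Weibull form p = m/(n+1) is a common choice and is assumed here), and the area ratio method scales a gauged record by the ratio of catchment areas. A minimal sketch under those assumptions, with hypothetical flow series and catchment areas:

```python
# Minimal sketch of the two building blocks mentioned in the abstract:
# (i) a flow duration curve built with the Weibull plotting position
#     p = m / (n + 1)  (an assumed, commonly used choice), and
# (ii) the area ratio transfer Q_ungauged = (A_ungauged / A_gauged) * Q_gauged.
# `flows` is a hypothetical streamflow series; areas are in consistent units.
import numpy as np

def flow_duration_curve(flows):
    """Return (exceedance probability, flow) pairs sorted from high to low flow."""
    q = np.sort(np.asarray(flows))[::-1]          # rank flows in descending order
    m = np.arange(1, len(q) + 1)                  # rank m = 1..n
    p_exceed = m / (len(q) + 1.0)                 # Weibull plotting position
    return p_exceed, q

def area_ratio_transfer(q_gauged, area_gauged, area_ungauged):
    """Scale a gauged streamflow series to an ungauged catchment by area ratio."""
    return (area_ungauged / area_gauged) * np.asarray(q_gauged)

# Example usage with hypothetical numbers:
# p, q = flow_duration_curve([12.0, 3.5, 8.2, 20.1, 5.0])
# q_ungauged = area_ratio_transfer(q, area_gauged=350.0, area_ungauged=210.0)
```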