88 results for Readability Assessment
at Indian Institute of Science - Bangalore - India
Abstract:
In this paper, we present a machine learning approach to measure the visual quality of JPEG-coded images. The features for predicting the perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is estimated without reference to the original images (a 'no-reference' metric). Here, the quality estimation problem is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and bias values are chosen randomly and the output weights are calculated analytically. The generalization performance of the ELM algorithm for classification problems with an imbalance in the number of samples per quality class depends critically on the input weights and bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select input weights and bias values that maximize the generalization performance of the classifier. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment. The experimental results show that the visual quality estimated by the proposed RCGA-ELM tracks the mean opinion score very well. The experimental results are compared with the existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
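As a minimal illustration of the ELM step the abstract relies on (random input weights and biases, output weights obtained analytically by least squares), the following numpy sketch trains and applies a basic ELM classifier; the feature dimension, class count and data are placeholders, and the KS-ELM/RCGA-ELM selection schemes are not shown.

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Basic ELM: random hidden layer, output weights from a single least-squares solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))      # random input weights
    b = rng.standard_normal(n_hidden)                    # random biases
    H = np.tanh(X @ W + b)                               # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)         # analytically computed output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)  # predicted quality class

# toy usage: 200 feature vectors (stand-ins for HVS features), 5 quality classes
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
y = rng.integers(0, 5, 200)
W, b, beta = elm_train(X, np.eye(5)[y])                  # one-hot targets
print(elm_predict(X[:10], W, b, beta))
```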
Abstract:
The demand for tunnelling and underground space creation is growing rapidly owing to civil infrastructure projects and urbanisation. Blasting remains the least expensive method of underground excavation in hard rock. Unfortunately, no specific safety guidelines are available for blasted tunnels with regard to the threshold limits of vibrations caused by repeated blasting activity in close proximity. This paper presents the results of a comprehensive study conducted to determine the effect of repeated blast loading on the damage experienced by a jointed basaltic rock mass during tunnelling works. Multiple rounds of blasting for various civil excavations in a railway tunnel imparted repeated loading on the rock mass of the tunnel sidewall and roof. The blast-induced damage was assessed using vibration attenuation equations based on the charge-weight scaling law and measured with borehole extensometers and a borehole camera. Ground vibrations from each blasting round were also monitored by triaxial geophones installed near the borehole extensometers. The peak particle velocity (V-max) observations and plastic deformations from the borehole extensometers were used to develop a site-specific damage model. The study reveals that repeated dynamic loading imparted on the exposed tunnel by subsequent blasts in the vicinity resulted in rock mass damage at vibration levels lower than the critical peak particle velocity (V-cr). It was found that repeated blast loading produced near-field damage due to high-frequency waves and far-field damage due to low-frequency waves. The far-field damage, after 45-50 occurrences of blast loading, was up to 55% of the near-field damage in the basaltic rock mass. The findings clearly indicate that the number of loading cycles from repeated blasting should be taken into consideration for proper assessment of blast-induced damage in underground excavations.
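A small sketch, under assumed conventions, of how a charge-weight scaling (scaled-distance) attenuation law of the general form V = K * SD^(-beta) is commonly fitted to monitored peak particle velocities; the square-root scaling, variable names and numbers below are illustrative, not the paper's data.

```python
import numpy as np

# illustrative monitoring records (not the paper's data):
# distance to the blast (m), maximum charge per delay (kg), measured PPV (mm/s)
R   = np.array([12.0, 18.0, 25.0, 40.0, 60.0])
Q   = np.array([10.0, 12.0, 10.0, 15.0, 12.0])
ppv = np.array([85.0, 48.0, 30.0, 14.0, 7.0])

SD = R / np.sqrt(Q)                              # square-root scaled distance (one common convention)
slope, intercept = np.polyfit(np.log(SD), np.log(ppv), 1)
K, beta = np.exp(intercept), -slope              # fitted law: V = K * SD**(-beta)
print(f"V = {K:.1f} * SD^(-{beta:.2f})")
print("predicted PPV at SD = 10:", round(K * 10.0 ** (-beta), 1), "mm/s")
```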
Abstract:
Improving access to safe drinking water can have multi-dimensional impacts on people's livelihoods, a point aptly reflected in the Millennium Development Goals (MDG) as one of the major objectives. Despite the availability of a diverse and complex set of water purification technologies, the lack of pragmatic and cost-effective deployment impedes the use of available water sources. Hence, in a country like India, simple low-energy technologies such as the solar still are likely to succeed. Solar stills can meet basic minimum drinking water requirements. They use sunlight to kill or inactivate many, if not all, of the pathogens found in water. This paper provides an integrated assessment of the suitability of the domestic solar still as a viable safe-water technology for India. An attempt has also been made to critically assess the operational feasibility of, and costs incurred in, using this technology in rural India.
Abstract:
The Ozone Monitoring Instrument (OMI) aboard EOS-Aura and the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard EOS-Aqua fly in formation as part of the A-train. Though OMI retrieves aerosol optical depth (AOD) and aerosol absorption, it must assume an aerosol layer height. MODIS cannot retrieve aerosol absorption, but its aerosol retrieval is not sensitive to aerosol layer height, and its smaller pixel size makes it less affected by subpixel clouds. Here we demonstrate an approach that uses MODIS-retrieved AOD to constrain the OMI retrieval, freeing OMI from making an a priori estimate of aerosol height and allowing a more direct retrieval of aerosol absorption. To predict near-UV optical depths from MODIS data we rely on the spectral curvature of the MODIS-retrieved visible and near-IR spectral AODs. Application of a joint OMI-MODIS retrieval over the north tropical Atlantic shows good agreement between OMI and MODIS-predicted AODs in the UV, which implies that the aerosol height assumed in the OMI standard algorithm is probably correct. In contrast, over the Arabian Sea, the MODIS-predicted AOD deviated from the OMI standard retrieval, but combined OMI-MODIS retrievals substantially improved information on aerosol layer height (on the basis of validation against airborne lidar measurements). This implies an improvement in the aerosol absorption retrieval, although the lack of UV absorption measurements prevents a true validation. Our study demonstrates the potential of multisatellite analysis of A-train data to improve the accuracy of retrieved aerosol products and suggests that a combined OMI-MODIS-CALIPSO retrieval has large potential to further improve assessments of aerosol absorption.
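A brief sketch of the general idea of extrapolating visible/near-IR AODs to near-UV wavelengths through their spectral curvature, here via a second-order polynomial fit in log-log space; the wavelengths, AOD values and functional form are assumptions for illustration and may differ from the OMI-MODIS algorithm.

```python
import numpy as np

# placeholder MODIS-like spectral AODs (wavelength in nm, AOD) -- not real retrievals
wav = np.array([470.0, 550.0, 660.0, 870.0])
aod = np.array([0.52, 0.45, 0.38, 0.27])

# a second-order polynomial in ln(wavelength) vs ln(AOD) captures spectral curvature
coeffs = np.polyfit(np.log(wav), np.log(aod), 2)

# extrapolate to near-UV wavelengths used by OMI aerosol products
for uv in (354.0, 388.0):
    tau_uv = np.exp(np.polyval(coeffs, np.log(uv)))
    print(f"predicted AOD at {uv:.0f} nm: {tau_uv:.3f}")
```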
Abstract:
This paper presents a new approach for assessing power system voltage stability based on an artificial feedforward neural network (FFNN). The approach uses real and reactive power, as well as voltage vectors for generators and load buses, to train the neural network (NN). The NN training patterns are generated from offline data covering various simulated loading conditions, using a conventional voltage stability algorithm based on the L-index. The performance of the trained NN is investigated on two systems under various voltage stability assessment conditions. The main advantage is that the proposed approach is fast, robust and accurate, and can be used online to predict the L-indices of all power system buses simultaneously. The method can also be used effectively to determine local and global stability margins for further improvement measures.
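For reference, a compact numpy sketch of the conventional L-index (Kessel-Glavitsch form) that such a network is trained to reproduce, using the usual partition of the bus admittance matrix into load (L) and generator (G) blocks; the 3-bus admittances and voltages are made-up values that merely exercise the formula.

```python
import numpy as np

def l_indices(Y_LL, Y_LG, V_L, V_G):
    """L-index of each load bus: L_j = |1 - sum_i F_ji * V_Gi / V_Lj|."""
    F = -np.linalg.solve(Y_LL, Y_LG)           # F_LG = -inv(Y_LL) @ Y_LG
    return np.abs(1.0 - (F @ V_G) / V_L)

# made-up 3-bus example (2 load buses, 1 generator bus); complex admittances in p.u.
Y_LL = np.array([[ 4.0 - 12.0j, -2.0 +  6.0j],
                 [-2.0 +  6.0j,  5.0 - 15.0j]])
Y_LG = np.array([[-2.0 + 6.0j],
                 [-3.0 + 9.0j]])
V_G = np.array([1.00 + 0.00j])                 # generator bus voltage
V_L = np.array([0.96 - 0.03j, 0.95 - 0.04j])   # load bus voltages
print(l_indices(Y_LL, Y_LG, V_L, V_G))         # values approaching 1 indicate proximity to collapse
```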
Abstract:
The objective of the current study was to investigate the mechanism by which the corpus luteum (CL) of the monkey undergoes desensitization to luteinizing hormone following exposure to increasing concentrations of human chorionic gonadotrophin (hCG), as occurs in pregnancy. Female bonnet monkeys were injected (i.m.) with increasing doses of hCG or deglycosylated hCG (dghCG) beginning on day 6 or day 12 of the luteal phase, for 10, 4 or 2 days. The day of the oestrogen surge was considered day 0 of the luteal phase. Luteal cells obtained from the CL of these animals were incubated with hCG (2 and 200 pg/ml) or dbcAMP (2.5, 25 and 100 µM) for 3 h at 37 °C, and the progesterone secreted was estimated. Corpora lutea of normally cycling monkeys on day 10/16/22 of the luteal phase were used as controls. In addition, the in vivo response to hCG and dghCG was assessed by determining serum steroid profiles following their administration. hCG (15-90 IU), but not dghCG (15-90 IU), treatment in vivo significantly (P < 0.05) elevated serum progesterone and oestradiol levels. Serum progesterone, however, could not be maintained at an elevated level by continuous treatment with hCG (from days 6-15), the progesterone level declining beyond day 13 of the luteal phase. Administering low doses of hCG (15-90 IU/day) from days 6-9, or high doses (600 IU/day) on days 8 and 9 of the luteal phase, resulted in a significant increase (about 10-fold over the corresponding control, P < 0.005) in the ability of luteal cells to synthesize progesterone in vitro (incubated controls). The luteal cells of the treated animals responded to dbcAMP (P < 0.05) but not to hCG added in vitro. The in vitro response of luteal cells to added hCG was inhibited by 0, 50 and 100% when the animals were injected with low (15-90 IU) or medium (100 IU) doses of dghCG between days 6-9 of the luteal phase, or high doses (600 IU on days 8 and 9 of the luteal phase), respectively; such treatment had no effect on the responsiveness of the cells to dbcAMP. The luteal cell responsiveness to dbcAMP in vitro was also blocked if hCG was administered for 10 days beginning on day 6 of the luteal phase. Although short-term hCG treatment during the late luteal phase (days 12-15) had no effect on luteal function, 10-day treatment beginning on day 12 of the luteal phase resulted in a regain of in vitro responsiveness to both hCG (P < 0.05) and dbcAMP (P < 0.05), suggesting that luteal rescue can occur even at this late stage. In conclusion, desensitization of the CL to hCG appears to be governed by the dose and the period for which it is exposed to hCG/dghCG. That desensitization is due to receptor occupancy is brought out by the facts that (i) it can be achieved by giving a larger dose of hCG over a 2-day period instead of a lower dose of the hormone for a longer (4- to 10-day) period, and (ii) the effect can largely be reproduced by using dghCG instead of hCG to block the receptor sites. It appears that, to achieve desensitization to dbcAMP as well, it is necessary to expose the luteal cells to a relatively high dose of hCG for more than 4 days.
Abstract:
Crystals growing from solution, from the vapour phase and from supercooled melt exhibit, as a rule, planar faces. The geometry and distribution of dislocations present within crystals thus grown are strongly related to growth on planar faces and to the different growth sectors, rather than to the physical properties of the crystals or the growth methods employed. As a result, many features of the generation and geometrical arrangement of defects are common to widely different crystal species. In this paper these common aspects of dislocation generation and configuration, which permit one to predict their nature and distribution, are discussed. For imaging the defects, a very versatile and widely applicable technique, viz. X-ray diffraction topography, is used. Growth dislocations in solution-grown crystals follow straight paths with well-defined directions. These preferred directions, which in most cases lie within ±15° of the growth normal, depend on the growth direction and on the Burgers vector involved. The likely configuration of dislocations in a growing crystal can be evaluated using the theory developed by Klapper, which is based on linear anisotropic elasticity. The preferred line direction of a particular dislocation is that for which the dislocation energy per unit growth length is a minimum. Line-direction analysis based on this theory enables one to characterise dislocations propagating in a growing crystal. A combined theoretical analysis and experimental investigation based on the above theory is presented.
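A toy numerical illustration of Klapper's criterion that a growth dislocation adopts the line direction minimizing its elastic energy per unit growth length; the isotropic straight-dislocation energy factor used here is a simplification of the anisotropic theory, and the growth normal and Burgers vector are arbitrary choices.

```python
import numpy as np

def energy_factor(l, b, nu=0.3):
    """Isotropic energy pre-factor of a straight dislocation (screw + edge mix)."""
    cos_beta = np.dot(l, b) / (np.linalg.norm(l) * np.linalg.norm(b))
    return cos_beta**2 + (1.0 - cos_beta**2) / (1.0 - nu)

n = np.array([0.0, 0.0, 1.0])                  # growth (face-normal) direction, arbitrary
b = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)   # Burgers vector direction, arbitrary

# scan candidate line directions in the plane spanned by n and the x-axis (which contains b)
best_cost, best_angle = np.inf, None
for theta in np.radians(np.arange(-80.0, 81.0)):
    l = np.cos(theta) * n + np.sin(theta) * np.array([1.0, 0.0, 0.0])
    cost = energy_factor(l, b) / np.dot(l, n)  # energy per unit *growth* length
    if cost < best_cost:
        best_cost, best_angle = cost, np.degrees(theta)
print(f"preferred line direction: about {best_angle:.0f} degrees from the growth normal")
```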
Abstract:
Several N,N′-dipyridyl- and N-phenyl-N′-pyridyl-thioureas were examined in different solvents at various temperatures by ¹H NMR in order to study their conformational properties. The influence of concentration and of the methyl substituent in the pyridine ring on the chemical shifts of the NH and pyridine groups was investigated. The observed chemical shifts are analysed in terms of the conformational properties of the molecules. Free energy barriers to internal rotation about the C-N bonds have been determined. Infrared spectra have been measured to supplement the NMR studies. Intramolecular hydrogen bonding plays a major role in the preferred conformation of pyridylthioureas. The data further reveal an interesting dynamic exchange phenomenon occurring in symmetric N,N′-dipyridylthioureas between two intramolecularly hydrogen-bonded conformers.
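The abstract does not state how the rotation barriers were extracted; one common variable-temperature NMR route is the equal-population two-site coalescence approximation combined with the Eyring equation, sketched below with illustrative numbers.

```python
import numpy as np

R_GAS = 8.314            # gas constant, J mol^-1 K^-1
K_B   = 1.380649e-23     # Boltzmann constant, J K^-1
H_PL  = 6.62607015e-34   # Planck constant, J s

def rotation_barrier_kJ(T_c, delta_nu):
    """Free-energy barrier from the two-site coalescence approximation + Eyring equation."""
    k_c = np.pi * delta_nu / np.sqrt(2.0)                 # exchange rate at coalescence, s^-1
    return R_GAS * T_c * np.log(K_B * T_c / (H_PL * k_c)) / 1000.0

# illustrative inputs: coalescence at 300 K, slow-exchange shift separation of 80 Hz
print(f"Delta G(barrier) ~ {rotation_barrier_kJ(300.0, 80.0):.1f} kJ/mol")
```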
Calciothermic reduction of TiO2: A diagrammatic assessment of the thermodynamic limit of deoxidation
Abstract:
Calciothermic reduction of TiO2 provides a potentially low-cost route to titanium production. Presented in this article is a suitably designed diagram, useful for assessing the degree of reduction of TiO2 and the residual oxygen contamination in the metal as a function of reduction temperature and other process parameters. The oxygen chemical potential diagram à la Ellingham-Richardson-Jeffes is useful for visualizing the thermodynamics of reduction reactions at high temperatures. Although traditionally the diagram depicts oxygen potentials corresponding to the oxidation of different metals to their corresponding oxides, or of lower oxides to higher oxides, oxygen potentials associated with solution phases at constant composition can readily be superimposed. The usefulness of the diagram for an insightful analysis of calciothermic reduction, either direct or through an electrochemical process, is discussed, and possible process variations, modeling and optimization strategies are identified.
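A schematic sketch of how Ellingham-type oxygen-potential lines of the form ΔG° ≈ A + B·T can be tabulated and compared to judge the feasibility of calciothermic reduction; the coefficients are rough placeholders for illustration only, not assessed thermodynamic data.

```python
import numpy as np

# Oxygen-potential lines (J per mol O2) approximated as A + B*T.
# The coefficients below are ROUGH PLACEHOLDERS for illustration, not assessed data.
ca_line = (-1.27e6, 210.0)    # 2Ca + O2 = 2CaO
ti_line = (-0.94e6, 180.0)    # Ti + O2 = TiO2

T = np.arange(900.0, 1501.0, 100.0)          # temperature grid, K
mu_ca = ca_line[0] + ca_line[1] * T
mu_ti = ti_line[0] + ti_line[1] * T

# Wherever the Ca/CaO line lies below the oxide (or dissolved-oxygen) line,
# reduction by calcium is thermodynamically favoured; the gap is the driving force.
for Tk, gap in zip(T, (mu_ti - mu_ca) / 1000.0):
    print(f"T = {Tk:.0f} K : driving force ~ {gap:.0f} kJ per mol O2")
```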
Abstract:
The paper presents a method for evaluating the external stability of reinforced soil walls subjected to earthquakes in the framework of the pseudo-dynamic method. The seismic reliability of the wall is evaluated by considering the different possible failure modes, namely sliding along the base, overturning about the toe of the wall, bearing capacity and the eccentricity of the resultant force. The analysis treats the properties of the reinforced backfill, the foundation soil below the base of the wall, the length of the geosynthetic reinforcement and characteristics of the earthquake ground motion, such as shear-wave and primary-wave velocities, as random variables. The optimum length of reinforcement needed to maintain stability against the four failure modes is obtained by targeting various component reliability indices. Differences between pseudo-static and pseudo-dynamic methods are clearly highlighted in the paper. A complete analysis of the pseudo-static and pseudo-dynamic methodologies shows that the pseudo-dynamic method yields realistic design values for the length of geosynthetic reinforcement under earthquake conditions.
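A generic Monte Carlo sketch of how a component reliability index for one failure mode (sliding is used here) can be estimated from random soil and seismic variables; the limit-state function, distributions and statistics below are illustrative stand-ins, not the paper's pseudo-dynamic formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# illustrative random variables (not the paper's statistics)
phi   = np.radians(rng.normal(32.0, 2.0, N))   # backfill friction angle
delta = np.radians(rng.normal(28.0, 2.0, N))   # base (wall-foundation) friction angle
kh    = rng.normal(0.12, 0.03, N)              # equivalent seismic coefficient

# crude stand-in for a seismic earth-pressure coefficient and a sliding limit state
K_ae = (1.0 - np.sin(phi)) / (1.0 + np.sin(phi)) + kh
g = np.tan(delta) / np.clip(K_ae, 1e-3, None) - 1.0      # g < 0 means sliding failure

pf = np.mean(g < 0.0)                          # Monte Carlo failure probability
beta = g.mean() / g.std()                      # first-order (FOSM) reliability index
print(f"P_f ~ {pf:.4f},  beta ~ {beta:.2f}")
```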
Abstract:
The classic work of Richardson and Gaunt [1] has provided an effective means of extrapolating the limiting result in an approximate analysis. From the authors' work on "Bounds for eigenvalues" [2-4], an interesting alternative method has emerged for assessing monotonically convergent approximate solutions by generating close bounds. While further investigation is needed to put this work on a sound theoretical foundation, this letter is intended to announce a possibility that has been confirmed by an exhaustive set of examples.
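For context, a short sketch of classical Richardson extrapolation applied to a trapezoidal-rule estimate whose leading error is O(h²); the integrand is arbitrary.

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (error ~ h^2)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def richardson(A_h, A_h2, p):
    """Combine estimates at step h and h/2 for a method of order p."""
    return A_h2 + (A_h2 - A_h) / (2 ** p - 1)

A_h  = trapezoid(np.sin, 0.0, np.pi, 8)
A_h2 = trapezoid(np.sin, 0.0, np.pi, 16)
print("h estimate   :", A_h)
print("h/2 estimate :", A_h2)
print("extrapolated :", richardson(A_h, A_h2, p=2), "(exact value is 2)")
```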
Abstract:
Uncertainties associated with the structural model and the measured vibration data may lead to unreliable damage detection. In this paper, we show that geometric and measurement uncertainties cause considerable problems in damage assessment, which can be alleviated by using a fuzzy logic-based approach for damage detection. Curvature damage factors (CDFs) of a tapered cantilever beam are used as damage indicators. Monte Carlo simulation (MCS) is used to study the changes in the damage indicator due to uncertainty in the geometric properties of the beam. Variations in these CDF measures due to randomness in the structural parameters, further contaminated with measurement noise, are used for developing and testing a fuzzy logic system (FLS). Results show that the method correctly identifies both single and multiple damages in the structure. For example, the FLS detects damage with an average accuracy of about 95 percent in a beam having geometric uncertainty of 1 percent COV and measurement noise of 10 percent in the single-damage scenario. For the multiple-damage case, the FLS identifies damage in the beam with an average accuracy of about 94 percent in the presence of the above-mentioned uncertainties. The paper brings together the disparate areas of probabilistic analysis and fuzzy logic to address uncertainty in structural damage detection.
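A small sketch of the curvature damage factor idea: mode-shape curvatures are approximated by central differences and the CDF at each location is the absolute curvature change averaged over modes; the cantilever mode shapes and damage bump below are synthetic placeholders.

```python
import numpy as np

def curvature(mode_shape, h):
    """Central-difference curvature of a mode shape sampled at spacing h."""
    return (mode_shape[:-2] - 2.0 * mode_shape[1:-1] + mode_shape[2:]) / h**2

def curvature_damage_factor(modes_intact, modes_damaged, h):
    """CDF_i = mean over modes of |kappa_damaged - kappa_intact| at location i."""
    diffs = [np.abs(curvature(md, h) - curvature(mi, h))
             for mi, md in zip(modes_intact, modes_damaged)]
    return np.mean(diffs, axis=0)

# synthetic placeholder mode shapes of a cantilever sampled at 21 points
x = np.linspace(0.0, 1.0, 21)
intact  = [1 - np.cos(0.5 * np.pi * x), 1 - np.cos(1.5 * np.pi * x)]
damaged = [m + 0.002 * np.exp(-((x - 0.4) / 0.05) ** 2) for m in intact]  # local change near x = 0.4
cdf = curvature_damage_factor(intact, damaged, h=x[1] - x[0])
print("peak CDF location:", x[1:-1][np.argmax(cdf)])
```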
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one method of addressing such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is itself characterized by uncertainty caused by partial ignorance, which stems from the unavailability of the outputs of some GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented by an envelope that contains all the CDFs generated with the available GCMs. The imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, under the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
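A compact sketch of the two ingredients combined in the abstract: a weighted-mean empirical CDF from several GCM-downscaled rainfall samples and the bounding envelope that plays the role of the imprecise CDF; the samples and weights are synthetic placeholders, not downscaled GCM output.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic monsoon-rainfall samples from three hypothetical GCMs (placeholder values)
gcm_samples = [rng.normal(mu, 120.0, 300) for mu in (950.0, 900.0, 1020.0)]
weights = np.array([0.5, 0.3, 0.2])                 # e.g. from performance/convergence scores

grid = np.linspace(500.0, 1400.0, 10)
cdfs = np.array([[np.mean(s <= r) for r in grid] for s in gcm_samples])

weighted_cdf = weights @ cdfs                       # single weighted-mean CDF
lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)   # envelope = imprecise CDF bounds

for r, w, lo, up in zip(grid, weighted_cdf, lower, upper):
    print(f"rainfall <= {r:6.0f} mm : weighted {w:.2f}, band [{lo:.2f}, {up:.2f}]")
```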
Abstract:
Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
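A minimal Viterbi decoder for a linear chain over dry/wet states, the kind of MAP decoding the abstract describes; the unary and transition scores below are generic log-potentials chosen for illustration, not the trained CRF of the paper.

```python
import numpy as np

def viterbi(unary, transition):
    """MAP state sequence for a linear chain.

    unary: (T, S) log-potentials per day and state; transition: (S, S) log-potentials."""
    T, S = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition + unary[t]   # indexed [previous state, current state]
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                       # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy 6-day example, states 0 = dry, 1 = wet; log-potentials are illustrative
unary = np.log(np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7],
                         [0.2, 0.8], [0.4, 0.6], [0.7, 0.3]]))
transition = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))  # persistence of dry/wet spells
print(viterbi(unary, transition))                        # most likely dry/wet sequence
```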
Abstract:
We propose a self-regularized pseudo-time marching strategy for ill-posed, nonlinear inverse problems involving the recovery of system parameters from partial and noisy measurements of the system response. While various regularized Newton methods are popularly employed to solve such problems, the resulting solutions are known to depend sensitively on the noise intensity in the data and on the regularization parameters, an optimal choice of which remains a tricky issue. Through limited numerical experiments on two parameter reconstruction problems, one involving the identification of a truss bridge and the other related to imaging soft-tissue organs for early detection of cancer, we demonstrate the superior features of the pseudo-time marching schemes.
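A toy illustration of the pseudo-time marching idea: the parameter estimate is evolved in artificial time by an explicit update driven by the data misfit, rather than solved by a regularized Newton step; the linear forward model below is a trivial stand-in for the truss and tissue-imaging problems.

```python
import numpy as np

rng = np.random.default_rng(0)

# trivial forward model: response = G @ theta, with noisy measurements
G = rng.standard_normal((20, 2))
theta_true = np.array([1.5, -0.7])
d = G @ theta_true + 0.01 * rng.standard_normal(20)

# pseudo-time marching: d(theta)/dt = -grad of 0.5*||G theta - d||^2, explicit Euler steps
theta = np.zeros(2)
dt = 0.01
for _ in range(3000):
    residual = G @ theta - d
    theta -= dt * (G.T @ residual)     # march in artificial time until the misfit settles

print("recovered parameters:", np.round(theta, 3), " true:", theta_true)
```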