Abstract:
It has been observed that a majority of glaciers in the Himalayas have been retreating. In this paper, we show that there are two major factors which control the advance/retreat of Himalayan glaciers: the slope of the glacier and changes in the equilibrium line altitude. While it is well known that these factors are important, we propose a new way of combining them and use it to predict retreat. The functional form of this model has been derived from numerical simulations using an ice-flow code. The model has been successfully applied to the movement of eight Himalayan glaciers during the past 25 years. It explains why the Gangotri glacier is retreating while the Zemu glacier, of nearly the same length, is stationary, even though they are subject to similar environmental changes. The model has also been applied to a larger set of glaciers in the Parbati basin, for which retreat based on satellite data is available, though over a shorter time period.
Abstract:
Artificial Neural Networks (ANNs) have been found to be a robust tool for modelling many non-linear hydrological processes. The present study evaluates the performance of ANNs in simulating and predicting groundwater levels in the uplands of a tropical coastal riparian wetland. The study compares two network architectures, the Feed Forward Neural Network (FFNN) and the Recurrent Neural Network (RNN), each trained under five algorithms, namely the Levenberg-Marquardt, Resilient Backpropagation, BFGS Quasi-Newton, Scaled Conjugate Gradient, and Fletcher-Reeves Conjugate Gradient algorithms, by simulating the water levels in a well in the study area. Two cases are analyzed: one with four inputs to the networks and the other with eight inputs. The two networks and five algorithms in both cases are compared to determine the best-performing combination that could simulate and predict the process satisfactorily. An ad hoc (trial-and-error) method is followed in optimizing the network structure in all cases. On the whole, the results show that the ANNs have simulated and predicted the water levels in the well with fair accuracy. This is evident from the low values of the Normalized Root Mean Square Error and Relative Root Mean Square Error and the high values of the Nash-Sutcliffe Efficiency Index and Correlation Coefficient (taken as the performance measures to calibrate the networks) calculated after the analysis. On comparing the predicted groundwater levels with those at the observation well, the FFNN trained with the Fletcher-Reeves Conjugate Gradient algorithm with four inputs outperformed all other combinations.
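The performance measures used to calibrate the networks can be sketched as follows. This is a minimal sketch: NRMSE is normalized here by the observed range, which is one of several conventions, and the study's exact definitions may differ.

```python
import math

def nrmse(obs, sim):
    # Normalized RMSE: RMSE divided by the range of the observations
    # (range normalization is an assumption; mean or std are also common).
    n = len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / n)
    return rmse / (max(obs) - min(obs))

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 minus the ratio of the model error
    # to the variance of the observations about their mean.
    mean_o = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - sse / var
```

A perfect simulation gives NRMSE = 0 and NSE = 1; values of NSE well below 1 indicate poor predictive skill.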
Abstract:
Most existing WCET estimation methods directly estimate the execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is cycles per instruction. Estimating ET directly may lead to a highly pessimistic estimate, since implicitly these methods may be using both worst-case IC and worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying the scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs using the cycle-accurate architectural simulator SimpleScalar on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI-versus-IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP) and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, then the measured maximum cycle count is observed to be a better estimate. For certain other benchmarks, the CPI-versus-IC relationship is either random or CPI remains constant with varying IC; in such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. The proposed method is observed to yield tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
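For the class of benchmarks where CPI varies with IC, the estimation step can be sketched as fitting f from measured (IC, CPI) pairs and evaluating WCET = SWIC * f(SWIC). This is a sketch under the assumption of a linear f; the paper's f may take other forms, and SWIC itself comes from an ILP formulation not shown here.

```python
def fit_cpi_model(ic_samples, cpi_samples):
    # Least-squares linear fit CPI = a*IC + b over measured benchmark runs
    # (a linear model is an assumption for this sketch).
    n = len(ic_samples)
    mx = sum(ic_samples) / n
    my = sum(cpi_samples) / n
    sxx = sum((x - mx) ** 2 for x in ic_samples)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ic_samples, cpi_samples))
    a = sxy / sxx
    return a, my - a * mx

def wcet_estimate(swic, a, b):
    # WCET = SWIC * f(SWIC), with f the fitted CPI model; SWIC would come
    # from the ILP-based worst-case instruction count, supplied by the caller.
    return swic * (a * swic + b)
```

For benchmarks whose CPI is random or constant in IC, the product of SWIC and the measured maximum CPI would replace the fitted model.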
Abstract:
This paper extends some geometric properties of a one-parameter family of relative entropies. These arise as redundancies when cumulants of compressed lengths are considered instead of expected compressed lengths. These parametric relative entropies are a generalization of the Kullback-Leibler divergence. They satisfy the Pythagorean property and behave like squared distances. This property, which was known for finite alphabet spaces, is now extended for general measure spaces. Existence of projections onto convex and certain closed sets is also established. Our results may have applications in the Rényi entropy maximization rule of statistical physics.
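For concreteness, the best-known one-parameter family of this kind is the Rényi divergence of order alpha, which reduces to the Kullback-Leibler divergence as alpha tends to 1. A minimal sketch for finite alphabets follows; the paper's exact family and parametrization may differ.

```python
import math

def renyi_divergence(p, q, alpha):
    # D_alpha(P||Q) = (1/(alpha-1)) * log( sum_i p_i^alpha * q_i^(1-alpha) )
    # for finite distributions p, q given as probability lists.
    if abs(alpha - 1.0) < 1e-12:
        # Limiting case: Kullback-Leibler divergence.
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1.0)
```

Like the KL divergence, each member of the family is zero iff P = Q, consistent with its squared-distance-like behavior.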
Abstract:
This work presents a finite element-based strategy for exterior acoustical problems based on an assumed pressure form that favours outgoing waves. The resulting governing equation, weak formulation, and finite element formulation are developed for both coupled and uncoupled problems. The developed elements are very similar to conventional elements in that they are based on the standard Galerkin variational formulation and use standard Lagrange interpolation functions and standard Gaussian quadrature. In addition, and in contrast to wave-envelope formulations and their extensions, the developed elements can be used in the immediate vicinity of the radiator/scatterer. The method is similar to the perfectly matched layer (PML) method in the sense that each layer of elements added around the radiator absorbs acoustical waves, so that no boundary condition needs to be applied at the outermost boundary where the domain is truncated. Comparisons against strategies such as the PML and wave-envelope methods show that the proposed method achieves considerably higher accuracy in both the near-field and far-field results.
Abstract:
This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structures or foundation-related problems, but seismic microzonation is now generally recognized as an important component of urban planning and disaster management. Seismic microzonation should therefore evaluate all possible earthquake-induced hazards and represent them through spatial distributions. This paper presents a new methodology for seismic microzonation based on the location of the study area and the possible associated hazards. The new method consists of seven important steps, each with a defined output, and these steps are linked with one another. Addressing a single step and its result, as is widely practiced, does not amount to seismic microzonation. The paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated considering the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated considering a site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at by following a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and then ranked according to a consensus opinion about their relative significance to the seismic hazard.
The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are more widely used for microzonation than geotechnical parameters, but the present study shows that the hazard index values depend on site-specific geotechnical parameters.
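The AHP weighting-and-overlay step described above can be sketched as deriving theme weights from a pairwise comparison matrix and combining them with normalized ranks. This is a minimal sketch; the actual themes, comparison judgments, and ranks used in the study are not reproduced here.

```python
def ahp_weights(pairwise):
    # Approximate the principal eigenvector of a pairwise comparison
    # matrix: normalize each column, then average across rows.
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    norm = [[pairwise[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(norm[i]) / n for i in range(n)]

def hazard_index(weights, ranks):
    # Weighted linear combination of normalized theme ranks at one
    # grid cell, as in a GIS weighted-overlay.
    return sum(w * r for w, r in zip(weights, ranks))
```

In a GIS workflow this combination would be evaluated cell by cell to produce the spatial hazard-index map.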
Abstract:
The analysis of a fully integrated optofluidic lab-on-a-chip sensor is presented in this paper. The device comprises collinear input and output waveguides separated by a microfluidic channel. When light passes through the analyte contained in the fluidic gap, optical power loss occurs owing to absorption of light. Apart from absorption, a mode-mismatch between the input and output waveguides occurs as the light propagates through the fluidic gap. The degree of mode-mismatch and the amount of optical power loss due to absorption of light by the fluid form the basis of our analysis. The sensor can detect changes in the refractive index and in the concentration of species contained in the analyte. The sensitivity to minute changes depends on several parameters: the mode spot size, the refractive index of the fluid, the molar concentration of the species contained in the analyte, the width of the fluidic gap, and the waveguide geometry. By correlating these parameters, an optimal fluidic gap distance corresponding to a particular mode spot size that achieves the best sensitivity is determined for both refractive index-based and absorbance-based sensing.
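The absorbance contribution to the power loss can be sketched with the Beer-Lambert law, where the path length is the fluidic gap width. This sketch deliberately ignores the mode-mismatch loss discussed above, and the variable names are illustrative.

```python
def transmitted_power(p_in, molar_absorptivity, concentration, gap_cm):
    # Beer-Lambert attenuation across the fluidic gap:
    # absorbance A = epsilon * c * L, transmitted power P = P_in * 10^(-A).
    # Mode-mismatch loss between the waveguides is NOT modeled here.
    absorbance = molar_absorptivity * concentration * gap_cm
    return p_in * 10.0 ** (-absorbance)
```

Because absorbance grows with the gap width while mode-mismatch loss also grows with it, the optimal gap in the paper balances the two effects rather than following this term alone.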
Abstract:
Geologic evidence along the northern part of the 2004 Aceh-Andaman rupture suggests that this region generated as many as five tsunamis in the prior 2000 years. We identify this evidence by drawing an analogy with geologic records of land-level change and the tsunami in 2004 from the Andaman and Nicobar Islands (A&N). These analogs include subsided mangrove swamps, uplifted coral terraces, liquefaction, and organic soils coated by sand and coral rubble. The pre-2004 evidence varies in potency, and the materials dated provide limiting ages on inferred tsunamis. The earliest tsunamis occurred between the second and sixth centuries A.D., evidenced by coral debris on southern Car Nicobar Island. A subsequent tsunami, probably in the range A.D. 770-1040, is inferred from deposits both in A&N and on the Indian subcontinent. It is the strongest candidate for a 2004-caliber earthquake in the past 2000 years. A&N also contain tsunami deposits from A.D. 1250 to 1450 that probably match those previously reported from Sumatra and Thailand, and which likely date to the 1390s or 1450s if correlated with well-dated coral uplift offshore Sumatra. Thus, age data from A&N suggest that, within the uncertainties in estimating relative sizes of paleo-earthquakes and tsunamis, the 1000-year interval can be divided in half by the earthquake or earthquakes of A.D. 1250-1450, of magnitude >8.0, and consequent tsunamis. Unlike the transoceanic tsunamis generated by full or partial rupture of the subduction interface, the A&N geology further provides evidence for the smaller historical tsunamis of 1762 and 1881, which may have been damaging locally.
Abstract:
This paper illustrates a wavelet coefficient-based approach, using experiments, to understand the sensitivity of ultrasonic signals to parametric variation of a crack configuration in a metal plate. A PZT patch sensor/actuator system integrated with a metal plate containing a through-thickness crack is used. The proposed approach uses piezoelectric patches, which can both actuate and sense the ultrasonic signals. While this approach leads to more flexibility and reduced cost for larger scalability of the sensor/actuator network, the complexity of the signals increases as compared to what is encountered in conventional ultrasonic NDE problems using selective wave modes. A Damage Index (DI), which is a function of the wavelet coefficients, has been introduced. Experiments have been carried out for various crack sizes and crack orientations using band-limited tone-burst signals generated through an FIR filter. For a 1 cm long crack interrogated with a 20 kHz tone-burst signal, the DI for the horizontal crack orientation increases by about 70% with respect to that for the 135-degree-oriented crack and by about 33% with respect to the vertically oriented crack. The detailed results reported in this paper are a step toward developing computational schemes for parametric identification of damage using sensor/actuator networks and ultrasonic waves.
Abstract:
The sensing of carbon dioxide (CO2) at room temperature, which has potential applications in environmental monitoring, healthcare, mining, biotechnology, the food industry, etc., is a challenge for the scientific community due to the relative inertness of CO2. Here, we propose a novel gas sensor based on a clad-etched Fiber Bragg Grating (FBG) with polyallylamine-amino-carbon nanotube coated on the surface of the core for detecting CO2 gas concentrations at room temperature, at ppm levels over a wide range (1000-4000 ppm). The limit of detection observed for the polyallylamine-amino-carbon nanotube coated core-FBG has been found to be about 75 ppm. In this approach, when CO2 gas molecules interact with the coated FBG, the effective refractive index of the fiber core changes, resulting in a shift in the Bragg wavelength. The experimental data show a linear response of the Bragg wavelength shift with increasing CO2 gas concentration. Besides being reproducible and repeatable, the technique is fast, compact, and highly sensitive. (C) 2013 AIP Publishing LLC.
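The sensing mechanism rests on the Bragg condition, lambda_B = 2 * n_eff * Lambda, so a change in the effective index shifts the reflected wavelength proportionally. A minimal sketch follows; the numerical values in the test are illustrative, not taken from the paper.

```python
def bragg_wavelength(n_eff, grating_period_nm):
    # Bragg condition: lambda_B = 2 * n_eff * Lambda,
    # with the grating period Lambda and result in the same length unit.
    return 2.0 * n_eff * grating_period_nm

def bragg_shift(delta_n_eff, grating_period_nm):
    # First-order shift for a small effective-index change at fixed Lambda:
    # delta_lambda = 2 * Lambda * delta_n_eff.
    return 2.0 * delta_n_eff * grating_period_nm
```

Since the shift is linear in delta_n_eff, a coating whose index change is proportional to gas concentration yields the linear wavelength response reported above.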
Abstract:
This study borrows the measures developed for the operation of water resources systems as a means of characterizing droughts in a given region. It is argued that the common approach of assessing drought using a univariate measure (severity or reliability) is inadequate, as decision makers need an assessment of the other facets considered here. It is proposed that the joint distribution of reliability, resilience, and vulnerability (referred to as RRV in a reservoir operation context), assessed using soil moisture data over the study region, be used to characterize droughts. Copulas are used to quantify the joint distribution between these variables. As reliability and resilience vary in a nonlinear but almost deterministic way, the joint probability distribution of only resilience and vulnerability is modeled. Recognizing the negative association between the two variables, a Plackett copula is used to formulate the joint distribution. The developed drought index, referred to as the drought management index (DMI), is able to differentiate the drought proneness of a given area from that of other areas. An assessment of the sensitivity of the DMI to the length of the data segments used in its evaluation indicates that relative stability is achieved if the data segments are 5 years or longer. The proposed approach is illustrated with reference to the Malaprabha River basin in India, using four adjoining Climate Prediction Center grid cells of soil moisture data that cover an area of approximately 12,000 km². (C) 2013 American Society of Civil Engineers.
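The Plackett copula mentioned above has a closed-form CDF, which makes it convenient for modeling a negatively associated pair such as resilience and vulnerability (theta < 1 corresponds to negative association). A minimal sketch, not the paper's fitted model:

```python
import math

def plackett_copula(u, v, theta):
    # Plackett copula CDF C(u, v; theta) for uniform marginals u, v in [0, 1]:
    #   C = [S - sqrt(S^2 - 4*theta*(theta-1)*u*v)] / (2*(theta-1)),
    #   S = 1 + (theta-1)*(u+v); theta = 1 recovers independence.
    if abs(theta - 1.0) < 1e-12:
        return u * v  # independence copula
    s = 1.0 + (theta - 1.0) * (u + v)
    disc = s * s - 4.0 * theta * (theta - 1.0) * u * v
    return (s - math.sqrt(disc)) / (2.0 * (theta - 1.0))
```

In practice theta would be estimated from the resilience-vulnerability pairs (e.g., via the sample odds ratio or maximum likelihood) before computing joint probabilities for the DMI.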
Abstract:
An epoch is defined as the instant of significant excitation within a pitch period of voiced speech. Epoch extraction continues to attract the interest of researchers because of its significance in speech analysis. Existing high-performance epoch extraction algorithms require either dynamic programming techniques or a priori information about the average pitch period. An algorithm without such requirements is proposed based on the integrated linear prediction residual (ILPR), which resembles the voice source signal. The half-wave rectified and negated ILPR (or the Hilbert transform of the ILPR) is used as the pre-processed signal. A new non-linear temporal measure named the plosion index (PI) is proposed for detecting `transients' in the speech signal. An extension of the PI, called the dynamic plosion index (DPI), is applied to the pre-processed signal to estimate the epochs. The proposed DPI algorithm is validated using six large databases which provide simultaneous EGG recordings. Creaky and singing voice samples are also analyzed. The algorithm has been tested for its robustness in the presence of additive white and babble noise and on simulated telephone-quality speech. The performance of the DPI algorithm is found to be comparable to or better than five state-of-the-art techniques for the experiments considered.
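The general form of such a transient measure can be sketched as the ratio of the sample magnitude at an instant to the mean magnitude over a preceding window. This is a sketch of the idea only; the exact definition, window placement, parameters, and pre-processing in the paper differ in detail.

```python
def plosion_index(signal, n, m1, m2):
    # Ratio of |s[n]| to the average |s| over a window of m2 samples
    # ending m1 samples before instant n. A large value flags an
    # abrupt rise in amplitude, i.e., a transient.
    window = signal[n - m1 - m2 : n - m1]
    avg = sum(abs(x) for x in window) / m2
    return abs(signal[n]) / avg
```

Applied sample by sample to the pre-processed residual, peaks of such a measure mark candidate transient instants.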
Abstract:
Automatic and accurate detection of the closure-burst transition events of stops and affricates serves many applications in speech processing. A temporal measure named the plosion index is proposed to detect such events, which are characterized by an abrupt increase in energy. Using the maxima of the pitch-synchronous normalized cross correlation as an additional temporal feature, a rule-based algorithm is designed that aims at selecting only those events associated with the closure-burst transitions of stops and affricates. The performance of the algorithm, characterized by receiver operating characteristic curves and temporal accuracy, is evaluated using the labeled closure-burst transitions of stops and affricates of the entire TIMIT test and training databases. The robustness of the algorithm is studied with respect to global white and babble noise as well as local noise using the TIMIT test set and on telephone quality speech using the NTIMIT test set. For these experiments, the proposed algorithm, which does not require explicit statistical training and is based on two one-dimensional temporal measures, gives a performance comparable to or better than the state-of-the-art methods. In addition, to test the scalability, the algorithm is applied on the Buckeye conversational speech corpus and databases of two Indian languages. (C) 2014 Acoustical Society of America.
Abstract:
Formation flying of small spacecraft provides a way to improve the resolution by aperture distribution. This requires autonomous control of relative position and relative attitude. The present work addresses the formation control using a PID controller to maintain both relative position and relative attitude. To avoid continuous pulsing due to noise, a dead-band has been provided in the position loop. PID control has been selected to maintain the formation in the presence of unmodeled disturbances. Simulations show that the proposed controller meets the required translational and rotational relative motions even in the presence of disturbances.
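The control law described above can be sketched as a PID update with a dead-band on the error, applied per axis. This is a minimal single-channel sketch; the gains, dead-band width, and the full six-degree-of-freedom coupling of the actual controller are not reproduced here.

```python
class PID:
    def __init__(self, kp, ki, kd, dead_band=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dead_band = dead_band
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Dead-band: ignore small errors so sensor noise does not cause
        # continuous actuator pulsing (as in the position loop above).
        if abs(error) < self.dead_band:
            return 0.0
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

One such loop would run for each relative-position and relative-attitude channel, with the dead-band applied only where pulsing is a concern.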
Abstract:
Monophasic Ba2NaNb5O15 was crystallized at the nanometer scale (12-36 nm) in the 2BaO-0.5Na2O-2.5Nb2O5-4.5B2O3 glass system. To begin with, optically transparent glasses in this system were fabricated via the conventional melt-quenching technique. The amorphous and glassy characteristics of the as-quenched samples were confirmed by X-ray powder diffraction and differential thermal analyses, respectively. A nearly homogeneous distribution of Ba2NaNb5O15 (BNN) nanocrystals with the tungsten bronze structure, akin to their bulk parent structure, was accomplished by subjecting the as-fabricated glasses to appropriate heat-treatment temperatures. Indeed, transmission electron microscopy (TEM) carried out on these samples corroborated the presence of Ba2NaNb5O15 nanocrystals dispersed in a continuous glass matrix. The as-quenched glasses were about 75% transparent in the visible range of the electromagnetic spectrum. The optical band gap and refractive index were found to depend on the crystallite size (at the nanoscale); the optical band gap increased with decreasing crystallite size. The refractive indices of the glass nanocrystal composites, as determined by the Brewster angle method, were rationalized using different empirical models. The refractive index dispersion with the wavelength of light was analyzed on the basis of the Sellmeier relations. At room temperature, under UV excitation (355 nm), these glass nanocrystal composites displayed violet-blue emission, which was ascribed to defect states.
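The Sellmeier analysis mentioned above models the index dispersion as n^2(lambda) = 1 + sum_i B_i * lambda^2 / (lambda^2 - C_i). A minimal sketch follows; the coefficients used in the test are illustrative, not the fitted values from the paper.

```python
import math

def sellmeier_index(wavelength_um, coeffs):
    # Sellmeier dispersion relation:
    #   n^2 = 1 + sum_i B_i * lam^2 / (lam^2 - C_i),
    # with lam in micrometres and coeffs a list of (B_i, C_i) pairs
    # (C_i in um^2). Valid away from the resonance wavelengths sqrt(C_i).
    lam2 = wavelength_um ** 2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in coeffs)
    return math.sqrt(n2)
```

Fitting the (B_i, C_i) pairs to Brewster-angle index measurements at several wavelengths would reproduce the dispersion analysis described in the abstract.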