202 results for "Estimation du débit de filtration glomérulaire"
at Indian Institute of Science - Bangalore - India
Abstract:
Numerical analysis of cracked structures often involves numerical estimation of stress intensity factors (SIFs) at a crack tip/front. A newly developed formulation called the universal crack closure integral (UCCI), for the evaluation of potential energy release rates (PERRs) and the corresponding SIFs, is presented in this paper. Unlike the existing element-dedicated forms of crack closure integrals (MCCI, VCCI), whose application is limited to finite element analysis, this new numerical SIF/PERR estimation technique is independent of the basic stress analysis procedure, making it universally applicable. The second merit of this procedure is that it avoids the generally error-prone zones close to the crack tip/front singularity. The UCCI procedure, based on Irwin's original CCI, is formulated and explored using a simple 2D problem of a straight crack in an infinite sheet. It is then applied to some three-dimensional crack geometries, with the stresses and displacements obtained from a boundary element program.
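For context, Irwin's original CCI, on which the UCCI builds, can be written in its standard mode-I form (a textbook relation, not this paper's UCCI expression):

```latex
% Irwin's mode-I crack closure integral (standard form):
G_I = \lim_{\Delta a \to 0} \frac{1}{2\,\Delta a}
      \int_0^{\Delta a} \sigma_{yy}(r)\,\Delta u_y(\Delta a - r)\,\mathrm{d}r,
\qquad K_I = \sqrt{E'\,G_I},
% where \sigma_{yy}(r) is the normal stress ahead of the crack tip,
% \Delta u_y the crack-opening displacement behind it, and
% E' = E (plane stress) or E/(1-\nu^2) (plane strain).
```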
Abstract:
With the availability of a huge amount of video data from various sources, efficient video retrieval tools are increasingly in demand. Since video is multi-modal data, perceptions of ``relevance'' between the user-provided query video (in Query-By-Example video search) and the retrieved video clips are subjective in nature. We present an efficient video retrieval method that takes the user's feedback on the relevance of retrieved videos and iteratively reformulates the input query feature vectors (QFV) for improved video retrieval. The QFV reformulation is done by a simple but powerful feature-weight optimization method based on the Simultaneous Perturbation Stochastic Approximation (SPSA) technique. A video retrieval system with video indexing, searching and relevance feedback (RF) phases is built to demonstrate the performance of the proposed method. The query and database videos are indexed using conventional video features such as color, texture, etc. However, we use comprehensive and novel methods of feature representation, and a spatio-temporal distance measure, to retrieve the top M videos that are similar to the query. In the feedback phase, the user's iterative relevance feedback on the previously retrieved videos is used to reformulate the QFV weights (measures of importance) so that they automatically reflect the user's preference. It is our observation that a few iterations of such feedback are generally sufficient for retrieving the desired video clips. The novel application of SPSA-based RF for user-oriented feature-weight optimization distinguishes the proposed method from existing ones. The experimental results show that the proposed RF-based video retrieval exhibits good performance.
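As a concrete illustration of the weight-reformulation step, here is a minimal SPSA sketch under stated assumptions: `relevance_loss` is a placeholder scoring function built from the user's relevant/irrelevant labels, and the gain constants are textbook defaults, not the paper's settings.

```python
import numpy as np

def spsa_update_weights(weights, relevance_loss, n_iter=50,
                        a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """Simultaneous Perturbation Stochastic Approximation (SPSA).

    Minimizes relevance_loss(w) (e.g. disagreement between the
    weighted feature distance and the user's relevant/irrelevant
    labels) with only two loss evaluations per iteration."""
    w = np.asarray(weights, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k**alpha                 # decaying step size
        ck = c / k**gamma                 # decaying perturbation size
        delta = np.random.choice([-1.0, 1.0], size=w.shape)  # Rademacher
        g_hat = (relevance_loss(w + ck * delta)
                 - relevance_loss(w - ck * delta)) / (2.0 * ck * delta)
        w = np.clip(w - ak * g_hat, 0.0, None)   # keep weights non-negative
    return w / max(w.sum(), 1e-12)        # normalize to unit sum
```

Because the simultaneous perturbation estimates the full gradient from two function evaluations regardless of the feature dimension, each feedback iteration stays cheap even for long feature vectors.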
Abstract:
A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving a 99.5% gravimetric yield in both cases. There is no interference from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining the pyrophosphate and triphosphate contents of various matrices. In the case of orthophosphate, the proposed method differs from the available methods, such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.
Abstract:
Enumeration of adhered cells of Thiobacillus ferrooxidans on sulphide minerals through protein assay poses problems due to interference from dissolved mineral constituents. The manner in which sulphide minerals such as pyrite, chalcopyrite, sphalerite, arsenopyrite and pyrrhotite interfere with bacterial protein estimation is demonstrated. Such interferences can be minimised either through dilution or addition of H2O2 to the filtrate after hot alkaline digestion of the biotreated mineral samples.
Abstract:
Let a and s denote the interarrival times and service times in a GI/GI/1 queue. Let a(n), s(n) be random variables whose distributions are the distributions of a and s estimated from iid samples of a and s of size n. Let w be a random variable with the stationary distribution pi of the waiting times of the queue with input (a, s). We consider the problem of estimating E[w^alpha], alpha > 0, and pi via simulations when (a(n), s(n)) are used as input. Conditions for the accuracy of the asymptotic estimate, continuity of the asymptotic variance and uniformity in the rate of convergence to the estimate are obtained. We also obtain rates of convergence for sample moments, the empirical process and the quantile process for the regenerative processes. Robust estimates are also obtained when an outlier-contaminated sample of a and s is provided. In the process we obtain consistency, continuity and asymptotic normality of M-estimators for stationary sequences. Some robustness results for Markov processes are included.
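The plug-in simulation estimator studied here can be sketched with the standard GI/GI/1 Lindley recursion; the step below resamples from the empirical distributions of the observed samples (playing the role of a(n), s(n)). The distributions, sample sizes and burn-in are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def waiting_times(interarrivals, services, w0=0.0):
    """Lindley recursion: W_{k} = max(W_{k-1} + S_{k-1} - A_{k}, 0)."""
    w = np.empty(len(services))
    w[0] = w0
    for k in range(1, len(services)):
        w[k] = max(w[k - 1] + services[k - 1] - interarrivals[k], 0.0)
    return w

# Stand-in "observed" samples of a and s (rho < 1, so the queue is stable).
a_obs = rng.exponential(1.0, size=500)
s_obs = rng.exponential(0.8, size=500)

# Plug-in step: simulate with inputs resampled from the empirical
# distributions, then estimate E[w^alpha] after discarding a burn-in.
a_sim = rng.choice(a_obs, size=100_000)
s_sim = rng.choice(s_obs, size=100_000)
w = waiting_times(a_sim, s_sim)[10_000:]
alpha = 1.0
print("estimate of E[w^alpha]:", np.mean(w ** alpha))
```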
Abstract:
Despite great advances in very-large-scale integrated-circuit design and manufacturing, the performance of even the best available high-speed, high-resolution analog-to-digital converter (ADC) is known to deteriorate while acquiring fast-rising, high-frequency, and nonrepetitive waveforms. Waveform digitizers (ADCs) used in high-voltage impulse recordings and measurements are invariably subjected to such waveforms. Errors resulting from lowered ADC performance can be unacceptably high, especially when higher accuracies have to be achieved (e.g., when the digitizer is part of a reference measuring system). Static and dynamic nonlinearities (estimated independently) are vital indices for evaluating the performance and suitability of ADCs used in such environments. Typically, the estimation of static nonlinearity takes 10-12 h or more (for a 12-b ADC), and dynamic characterization requires the acquisition of millions of samples at high input frequencies. ADCs with even higher resolution and faster sampling speeds will soon become available, so there is a need to reduce the testing time for evaluating these parameters. This paper proposes a novel and time-efficient method for the simultaneous estimation of static and dynamic nonlinearity from a single test. This is achieved by conceiving a test signal composed of a high-frequency sinusoid (which addresses the dynamic assessment) modulated by a low-frequency ramp (relevant to the static part). Details of implementation and results on two digitizers are presented and compared with nonlinearities determined by the existing standardized approaches. The good agreement in results and the achievable time savings indicate its suitability.
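One plausible construction of such a composite stimulus is sketched below; all frequencies, durations and the amplitude-modulation reading of ``modulated by a low-frequency ramp'' are assumptions, not the paper's values.

```python
import numpy as np

# Guessed construction: a high-frequency sinusoid whose amplitude is
# modulated by a slow ramp, so the sine sweeps through the full code
# range (static part) while its frequency stresses the converter's
# dynamic behaviour at every level.
fs = 100e6            # digitizer sampling rate, Hz (assumed)
f_hf = 5e6            # high-frequency sinusoid, Hz (assumed)
t_ramp = 10e-3        # ramp duration, s (assumed)

t = np.arange(0.0, t_ramp, 1.0 / fs)
ramp = t / t_ramp                                   # 0 -> 1 envelope
test_signal = ramp * np.sin(2 * np.pi * f_hf * t)   # normalized stimulus
```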
Abstract:
The LISA Parameter Estimation Taskforce was formed in September 2007 to provide the LISA Project with vetted codes, source distribution models and results related to parameter estimation. The Taskforce's goal is to be able to quickly calculate the impact of any mission design changes on LISA's science capabilities, based on reasonable estimates of the distribution of astrophysical sources in the universe. This paper describes our Taskforce's work on massive black-hole binaries (MBHBs). Given present uncertainties in the formation history of MBHBs, we adopt four different population models, based on (i) whether the initial black-hole seeds are small or large and (ii) whether accretion is efficient or inefficient at spinning up the holes. We compare four largely independent codes for calculating LISA's parameter-estimation capabilities. All codes are based on the Fisher-matrix approximation, but in the past they used somewhat different signal models, source parametrizations and noise curves. We show that once these differences are removed, the four codes give results in extremely close agreement with each other. Using a code that includes both spin precession and higher harmonics in the gravitational-wave signal, we carry out Monte Carlo simulations and determine the number of events that can be detected and accurately localized in our four population models.
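A toy sketch of the Fisher-matrix approximation common to all four codes is given below; a white-noise inner product stands in for LISA's frequency-domain, noise-curve-weighted one, and `model` and the step size are placeholders.

```python
import numpy as np

def fisher_matrix(model, theta, t, sigma, eps=1e-6):
    """Toy Fisher matrix F_ij = (dh/dtheta_i, dh/dtheta_j) under white
    noise of standard deviation sigma.  `model(theta, t)` returns the
    signal time series; real LISA codes use a noise-weighted
    frequency-domain inner product instead."""
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for i in range(len(theta)):
        dp = theta.copy(); dp[i] += eps
        dm = theta.copy(); dm[i] -= eps
        derivs.append((model(dp, t) - model(dm, t)) / (2.0 * eps))
    return np.array([[np.sum(di * dj) / sigma**2 for dj in derivs]
                     for di in derivs])

# 1-sigma parameter errors follow from the inverse Fisher matrix:
#   cov = np.linalg.inv(F); errors = np.sqrt(np.diag(cov))
```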
Abstract:
Doppler weather radars with fast scanning rates must estimate spectral moments based on a small number of echo samples. This paper concerns the estimation of mean Doppler velocity in a coherent radar using a short complex time series. Specific results are presented based on 16 samples. A wide range of signal-to-noise ratios are considered, and attention is given to ease of implementation. It is shown that FFT estimators fare poorly in low SNR and/or high spectrum-width situations. Several variants of a vector pulse-pair processor are postulated and an algorithm is developed for the resolution of phase angle ambiguity. This processor is found to be better than conventional processors at very low SNR values. A feasible approximation to the maximum entropy estimator is derived as well as a technique utilizing the maximization of the periodogram. It is found that a vector pulse-pair processor operating with four lags for clear air observation and a single lag (pulse-pair mode) for storm observation may be a good way to estimate Doppler velocities over the entire gamut of weather phenomena.
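For reference, the conventional single-lag pulse-pair processor that the vector variants generalize can be sketched in its standard textbook form (the multi-lag vector extension and the phase-ambiguity resolution algorithm are not reproduced here):

```python
import numpy as np

def pulse_pair_velocity(iq, prt, wavelength):
    """Conventional pulse-pair mean Doppler velocity from a short
    complex I/Q sequence:

        v_hat = -(lambda / (4*pi*T)) * arg(R(T)),
        R(T)  = mean( conj(x[k]) * x[k+1] )

    The sign depends on the I/Q convention; the unambiguous range is
    +/- lambda / (4*T)."""
    r1 = np.mean(np.conj(iq[:-1]) * iq[1:])   # lag-1 autocorrelation
    return -wavelength / (4.0 * np.pi * prt) * np.angle(r1)
```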
Abstract:
The emission from neutral hydrogen (HI) clouds in the post-reionization era (z <= 6), too faint to be detected individually, is present as a diffuse background in all low-frequency radio observations below 1420 MHz. The angular and frequency fluctuations of this radiation (of order 1 mK) are an important future probe of the large-scale structures in the Universe. We show that such observations are a very effective probe of the background cosmological model and the perturbed Universe. In our study we focus on the possibility of determining the redshift-space distortion parameter beta, the coordinate distance r(nu), and its derivative with redshift, r'(nu). Using reasonable estimates for the observational uncertainties and configurations representative of ongoing and upcoming radio interferometers, we predict parameter estimation at a precision comparable with supernova Ia observations and galaxy redshift surveys, across a wide range in redshift that is only partially accessed by other probes. Future HI observations of the post-reionization era present a new technique, complementing several existing ones, to probe the expansion history and to elucidate the nature of dark energy.
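For reference, beta enters through the standard Kaiser redshift-space distortion of the power spectrum (a textbook relation, not a result specific to this paper):

```latex
% Kaiser redshift-space distortion:
P_s(k, \mu) = \left(1 + \beta\,\mu^2\right)^2 P(k),
% where \mu is the cosine of the angle between the wavevector k and
% the line of sight, and P(k) is the real-space power spectrum.
```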
Abstract:
Forested areas play a dominant role in the global hydrological cycle. Evapotranspiration is a dominant component of the water balance, often approaching the rainfall in magnitude. Though sophisticated methods are available for its estimation, a simple and reliable tool is needed for good water budgeting. Studies have established that evapotranspiration in forested areas is much higher than in agricultural areas. Latitude, forest type, climate and geological characteristics also add to the complexity of its estimation. Few studies have compared different evapotranspiration methods on forested watersheds in semi-arid tropical forests. In this paper, a comparative study of different methods of estimating evapotranspiration is made with reference to actual measurements from an all-parameter climatological station on a small deciduous forested watershed at Mulehole (area 4.5 km2), South India. Potential evapotranspiration (ETo) was calculated using ten physically based and empirical methods. Actual evapotranspiration (AET) was calculated through a water balance computed with the SWAT model. The Penman-Monteith method has been used as a benchmark against which the estimates from the various methods are compared. The calculated AET shows good agreement with the worldwide evapotranspiration curve for forests. Error estimates have been made with respect to the Penman-Monteith method. This study gives an idea of the errors involved whenever methods with limited data requirements are used, and also shows the use of indirect methods for estimating evapotranspiration, which are more suitable for regional-scale studies.
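For reference, the Penman-Monteith benchmark in its widely used FAO-56 reference-crop form reads as follows (the study's exact formulation may differ in detail):

```latex
% FAO-56 Penman-Monteith reference evapotranspiration (mm day^-1):
ET_o = \frac{0.408\,\Delta\,(R_n - G)
             + \gamma\,\dfrac{900}{T + 273}\,u_2\,(e_s - e_a)}
            {\Delta + \gamma\,(1 + 0.34\,u_2)}
% R_n: net radiation and G: soil heat flux (MJ m^-2 day^-1),
% T: mean air temperature (deg C), u_2: wind speed at 2 m (m s^-1),
% e_s - e_a: vapour pressure deficit (kPa), Delta: slope of the
% saturation vapour pressure curve, gamma: psychrometric constant.
```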
Abstract:
This paper presents the site classification of the Bangalore Mahanagar Palike (BMP) area using geophysical data and the evaluation of spectral acceleration at ground level using a probabilistic approach. Site classification has been carried out using experimental data from the shallow geophysical method of Multichannel Analysis of Surface Waves (MASW). One-dimensional (1-D) MASW surveys have been carried out at 58 locations and the respective velocity profiles obtained. The average shear-wave velocity over 30 m depth (Vs(30)) has been calculated and used for the site classification of the BMP area as per NEHRP (National Earthquake Hazards Reduction Program). Based on the Vs(30) values, the major part of the BMP area can be classified as ``site class D'' and ``site class C''. A smaller portion of the study area, in and around Lalbagh Park, is classified as ``site class B''. Further, probabilistic seismic hazard analysis has been carried out to map the seismic hazard in terms of spectral acceleration (S-a) at rock and ground level, considering the site classes and the six seismogenic sources identified. The mean annual rate of exceedance and the cumulative probability hazard curve for S-a have been generated. The quantified hazard values, in terms of spectral acceleration for short and long periods, are mapped for rock and for site classes C and D with 10% probability of exceedance in 50 years on a grid of 0.5 km. In addition, the Uniform Hazard Response Spectrum (UHRS) at surface level has been developed for 5% damping and 10% probability of exceedance in 50 years for rock and for site classes C and D. These spectral accelerations and uniform hazard spectra can be used to assess the design force for important structures and to develop the design spectrum.
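A minimal sketch of the Vs(30) computation from a layered velocity profile, together with the standard NEHRP class boundaries, is given below; the MASW dispersion inversion itself is not shown, and the function names are illustrative.

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(d_i / v_i), truncating the profile at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for d, v in zip(thicknesses_m, velocities_mps):
        d = min(d, 30.0 - depth)          # clip the layer at 30 m
        travel_time += d / v
        depth += d
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def nehrp_class(vs30_mps):
    """NEHRP site class from Vs30 (standard boundary values)."""
    if vs30_mps > 1500: return "A"
    if vs30_mps > 760:  return "B"
    if vs30_mps > 360:  return "C"
    if vs30_mps > 180:  return "D"
    return "E"

# e.g. vs30([5, 10, 20], [250, 400, 700]) -> ~452 m/s, class "C"
```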
Abstract:
In this paper, we present an approach to estimating the fractal complexity of discrete-time signal waveforms based on computing the area bounded by the sample points of the signal at different time resolutions. The slope of the best straight-line fit to the graph of log(A(rk)/rk^2) versus log(1/rk) is estimated, where A(rk) is the area computed at time resolution rk. The slope quantifies the complexity of the signal and is taken as an estimate of the fractal dimension (FD). The proposed approach is used to estimate the fractal dimension of parametric fractal signals with known fractal dimensions, and the method has given accurate results. The estimation accuracy of the method is compared with that of Higuchi's and Sevcik's methods. The proposed method has given more accurate results than Sevcik's method, and its results are comparable to those of Higuchi's method. The practical application of the complexity measure in detecting changes in signal complexity is discussed using real sleep electroencephalogram recordings from eight different subjects. The FD-based approach has shown good performance in discriminating different stages of sleep.
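The estimator can be sketched as below under one plausible reading of the area construction (the increment-rectangle area is an assumption; the paper's exact definition of A(rk) may differ):

```python
import numpy as np

def fractal_dimension_area(x, resolutions=(1, 2, 4, 8, 16)):
    """Area-based fractal dimension sketch.

    At resolution rk the signal is subsampled every rk points and the
    'area' A(rk) is taken as the sum of rectangles of width rk spanning
    successive increments (an assumed construction).  FD is the slope
    of the best straight-line fit of log(A(rk)/rk^2) versus log(1/rk):
    ~1 for a smooth signal, ~2 for white noise."""
    x = np.asarray(x, dtype=float)
    log_inv_r, log_area = [], []
    for rk in resolutions:
        xs = x[::rk]                              # subsample at resolution rk
        area = np.sum(rk * np.abs(np.diff(xs)))   # increment rectangles
        log_inv_r.append(np.log(1.0 / rk))
        log_area.append(np.log(area / rk**2))
    slope, _ = np.polyfit(log_inv_r, log_area, 1)  # best straight-line fit
    return slope
```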
Abstract:
Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques for determining the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA-1, DESA-2, the auditory processing approach, the ZC approach, etc., are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as in communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We also propose three new methods for IF estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
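For reference, a minimal sketch of the Teager-energy-based DESA-2 estimator, one of the baseline AM-FM methods compared, is given below in its standard discrete form; the numerical guards (clipping, epsilon) are mine.

```python
import numpy as np

def teager(x):
    """Discrete Teager energy operator: Psi[x](n) = x(n)^2 - x(n-1)x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x):
    """DESA-2 instantaneous amplitude and frequency.

    With y(n) = x(n+1) - x(n-1):
      Omega(n) = 0.5 * arccos(1 - Psi[y](n) / (2 * Psi[x](n)))
      |a(n)|   = 2 * Psi[x](n) / sqrt(Psi[y](n))
    Omega is in radians/sample; multiply by fs/(2*pi) for Hz."""
    x = np.asarray(x, dtype=float)
    psi_x = teager(x)
    y = x[2:] - x[:-2]
    psi_y = teager(y)
    px = psi_x[1:-1]                      # align the two operators
    arg = np.clip(1.0 - psi_y / (2.0 * px), -1.0, 1.0)
    omega = 0.5 * np.arccos(arg)
    amp = 2.0 * px / np.sqrt(np.maximum(psi_y, 1e-12))
    return amp, omega
```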