269 results for DETERMINISTIC ESTIMATION
at Indian Institute of Science - Bangalore - India
Abstract:
A Monte Carlo model of ultrasound modulation of multiply scattered coherent light in a highly scattering medium has been developed to estimate the phase shift experienced by a photon beam on its transit through the US-insonified region. The phase shift is related to the tissue stiffness, thereby opening an avenue for possible breast tumor detection. When the scattering centers in the tissue medium are exposed to deterministic forcing by a focused ultrasound (US) beam, the US-induced oscillation is almost entirely along a particular direction, the direction defined by the transducer axis; the scattering events therefore increase, and so does the phase shift experienced by light that traverses the medium. The phase shift is found to increase with increasing anisotropy factor g of the medium. However, as the size of the focused region, which is the region of interest (ROI), increases, a large number of scattering events take place within the ROI and the ensemble average of the phase shift (Delta phi) becomes very close to zero: the phase of an individual photon is randomly distributed over 2 pi when the scattered photon path crosses a large number of ultrasound wavelengths in the focused region. This is true at high ultrasound frequencies (1 MHz), when the photon mean free path l_s is comparable to the wavelength of the US beam. However, at much lower US frequencies (100 Hz), the wavelength of sound is orders of magnitude larger than l_s, and with a high value of g (g ~ 0.9) there is a distinct, measurable phase difference for a photon that traverses the insonified region. Experiments are carried out to validate the simulation results.
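For illustration, the following minimal sketch shows how such a Monte Carlo simulation might accumulate the US-induced phase shift along one photon path, using Henyey-Greenstein scattering with anisotropy g and a momentum-transfer phase term for scatterers displaced along the transducer axis. All parameter values and modelling choices here are generic assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def henyey_greenstein(g):
    """Sample the cosine of the scattering angle from the HG phase function."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    s = 2.0 * rng.random() - 1.0
    return (1 + g**2 - ((1 - g**2) / (1 + g * s)) ** 2) / (2 * g)

def new_direction(d, g):
    """Rotate unit direction d by an HG-sampled polar angle and a uniform azimuth."""
    cos_t = henyey_greenstein(g)
    sin_t = np.sqrt(max(0.0, 1 - cos_t**2))
    phi = 2 * np.pi * rng.random()
    # build an orthonormal frame around d
    a = np.array([1.0, 0.0, 0.0]) if abs(d[2]) > 0.9 else np.array([0.0, 0.0, 1.0])
    u = np.cross(d, a); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return cos_t * d + sin_t * (np.cos(phi) * u + np.sin(phi) * v)

def photon_phase_shift(ls, g, k_opt, k_us, amp, roi_radius, n_steps=200):
    """Accumulate the US-induced phase shift along a single photon path (illustrative)."""
    pos = np.zeros(3)
    d_in = np.array([0.0, 0.0, 1.0])
    dphi = 0.0
    for _ in range(n_steps):
        pos = pos + d_in * rng.exponential(ls)        # free flight of ~ one mean free path
        d_out = new_direction(d_in, g)
        if np.linalg.norm(pos) < roi_radius:          # scatterer inside the insonified ROI
            # assumed scatterer displacement along the transducer (z) axis
            dr = np.array([0.0, 0.0, amp * np.sin(k_us * pos[2])])
            dphi += k_opt * np.dot(d_in - d_out, dr)  # momentum-transfer phase term
        d_in = d_out
    return dphi

# ensemble average over many photons (illustrative parameters only)
shifts = [photon_phase_shift(ls=1e-4, g=0.9, k_opt=2 * np.pi / 633e-9,
                             k_us=2 * np.pi / 1.5e-3, amp=1e-8, roi_radius=2e-3)
          for _ in range(2000)]
print("mean phase shift:", np.mean(shifts), "rad")
```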
Abstract:
The main objective of the paper is to develop a new method to estimate the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both intraplate and active regions. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300, and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of subsurface rupture length (RLD) to total fault length (TFL). PFR is used to arrive at RLD, which is in turn used to estimate the maximum magnitude for each seismic source. The maximum magnitude for both regions was estimated and compared with the existing methods for determining M_max values. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the "a" and "b" parameters, and the maximum observed magnitude (M_max,obs) were determined for each SSA and used to estimate M_max with all the existing methods. It is observed from the study that the existing deterministic and probabilistic M_max estimation methods are sensitive to the SSA radius, M_c, the a and b parameters, and M_max,obs values, whereas M_max determined from the proposed method is a function of the rupture character rather than of the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
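As a rough numerical illustration of the idea, the sketch below converts a total fault length and a regional PFR into an RLD and then into a magnitude through a generic rupture-length scaling relation. The coefficients shown (Wells & Coppersmith, 1994, all slip types) and the example fault are assumptions for illustration only; the paper's exact relation, coefficients, and data may differ.

```python
import math

def mmax_from_rupture(total_fault_length_km, pfr, a=4.38, b=1.49):
    """Illustrative sketch: estimate M_max from regional rupture character.

    RLD (subsurface rupture length) is taken as a fraction (PFR) of the total
    fault length, then converted to magnitude with a generic rupture-length
    scaling relation M = a + b * log10(RLD). Default coefficients follow
    Wells & Coppersmith (1994, all slip types); the paper's relation may differ.
    """
    rld_km = pfr * total_fault_length_km
    return a + b * math.log10(rld_km)

# hypothetical fault: 80 km long, with an assumed regional PFR of 0.35
print(round(mmax_from_rupture(80.0, 0.35), 2))
```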
Abstract:
Numerical analysis of cracked structures often involves numerical estimation of stress intensity factors (SIFs) at a crack tip/front. A newly developed formulation called the universal crack closure integral (UCCI), for the evaluation of potential energy release rates (PERRs) and the corresponding SIFs, is presented in this paper. Unlike the existing element-dedicated forms of crack closure integrals (MCCI, VCCI), whose application is limited to finite element analysis, this new numerical SIF/PERR estimation technique is independent of the underlying stress analysis procedure, making it universally applicable. A second merit of the procedure is that it avoids the typically error-prone zones close to the crack tip/front singularity. The UCCI procedure, based on Irwin's original CCI, is formulated and explored using a simple 2D problem of a straight crack in an infinite sheet. It is then applied to some three-dimensional crack geometries with the stresses and displacements obtained from a boundary element program.
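For reference, the standard mode-I forms of Irwin's crack closure integral and of the PERR-SIF relation on which such formulations build are (textbook relations, not expressions taken from the paper):

$$
G_I \;=\; \lim_{\Delta a \to 0} \frac{1}{2\,\Delta a} \int_0^{\Delta a} \sigma_{yy}(r)\, \Delta u_y(\Delta a - r)\, dr,
\qquad
K_I \;=\; \sqrt{G_I\, E'},
$$

where $E' = E$ in plane stress and $E' = E/(1-\nu^2)$ in plane strain, $\sigma_{yy}$ is the normal stress ahead of the original crack tip, and $\Delta u_y$ is the crack-opening displacement behind the extended tip.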
Abstract:
With the availability of a huge amount of video data from various sources, efficient video retrieval tools are increasingly in demand. Since video is multi-modal data, the perception of "relevance" between the user-provided query video (in Query-By-Example video search) and the retrieved video clips is subjective in nature. We present an efficient video retrieval method that takes the user's feedback on the relevance of retrieved videos and iteratively reformulates the input query feature vectors (QFV) for improved video retrieval. The QFV reformulation is done by a simple but powerful feature-weight optimization method based on the Simultaneous Perturbation Stochastic Approximation (SPSA) technique. A video retrieval system with video indexing, searching, and relevance feedback (RF) phases is built to demonstrate the performance of the proposed method. The query and database videos are indexed using conventional video features such as color and texture; however, we use comprehensive and novel methods of feature representation, and a spatio-temporal distance measure, to retrieve the top M videos that are similar to the query. In the feedback phase, the user's relevance judgments on the previously retrieved videos are used to reformulate the QFV weights (a measure of importance) so that they automatically reflect the user's preference. It is our observation that a few iterations of such feedback are generally sufficient for retrieving the desired video clips. The novel application of SPSA-based RF for user-oriented feature-weight optimization distinguishes the proposed method from existing ones. The experimental results show that the proposed RF-based video retrieval exhibits good performance.
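A minimal sketch of an SPSA weight-update step of this kind is given below, assuming a generic scalar relevance-feedback loss; the coefficient values, the toy objective, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def spsa_update(weights, loss, k, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """One SPSA iteration: perturb all weights simultaneously with a random
    +/-1 (Bernoulli) vector, estimate the gradient from two loss evaluations,
    and take a gradient step. `loss` is any scalar objective, e.g. a
    dissimilarity between user-marked relevant videos and the weighted query."""
    ak = a / (k + 1) ** alpha          # decaying step size
    ck = c / (k + 1) ** gamma          # decaying perturbation size
    delta = rng.choice([-1.0, 1.0], size=weights.shape)
    g_hat = (loss(weights + ck * delta) - loss(weights - ck * delta)) / (2 * ck * delta)
    return weights - ak * g_hat

# toy objective standing in for the relevance-feedback loss (hypothetical)
target = np.array([0.7, 0.2, 0.1])                 # "ideal" feature weights
loss = lambda w: np.sum((w - target) ** 2)

w = np.ones(3) / 3                                 # start with uniform weights
for k in range(200):
    w = spsa_update(w, loss, k)
print(np.round(w, 3))
```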
Abstract:
A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1, and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving a 99.5% gravimetric yield in both cases. There is no interference from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining the pyrophosphate and triphosphate contents in various matrices. In the case of orthophosphate, the proposed method differs from the available methods, such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.
Abstract:
Enumeration of adhered cells of Thiobacillus ferrooxidans on sulphide minerals through protein assay poses problems due to interference from dissolved mineral constituents. The manner in which sulphide minerals such as pyrite, chalcopyrite, sphalerite, arsenopyrite and pyrrhotite interfere with bacterial protein estimation is demonstrated. Such interferences can be minimised either through dilution or addition of H2O2 to the filtrate after hot alkaline digestion of the biotreated mineral samples.
Abstract:
Let a and s denote the interarrival times and service times in a GI/GI/1 queue. Let a_n, s_n be random variables whose distributions are the distributions of a and s estimated from iid samples of a and s of size n. Let w be a random variable with the stationary distribution pi of the waiting times of the queue with input (a, s). We consider the problem of estimating E[w^alpha], alpha > 0, and pi via simulations when (a_n, s_n) are used as input. Conditions for the accuracy of the asymptotic estimate, continuity of the asymptotic variance, and uniformity in the rate of convergence of the estimate are obtained. We also obtain rates of convergence for sample moments, the empirical process and the quantile process for regenerative processes. Robust estimates are also obtained when an outlier-contaminated sample of a and s is provided. In the process we obtain consistency, continuity and asymptotic normality of M-estimators for stationary sequences. Some robustness results for Markov processes are included.
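The sketch below illustrates this plug-in simulation approach, assuming the Lindley recursion W_{n+1} = max(0, W_n + S_n - A_{n+1}) for the waiting times and empirical (resampled) distributions as the estimated inputs; the distributions, sample sizes, and burn-in are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def waiting_times(interarrivals, services):
    """Lindley recursion for the GI/GI/1 queue: W_{n+1} = max(0, W_n + S_n - A_{n+1})."""
    w = np.zeros(len(services))
    for n in range(len(services) - 1):
        w[n + 1] = max(0.0, w[n] + services[n] - interarrivals[n + 1])
    return w

# "estimated" input distributions: resample from observed iid samples of a and s
a_sample = rng.exponential(1.0, size=500)      # observed interarrival times (hypothetical data)
s_sample = rng.exponential(0.8, size=500)      # observed service times (hypothetical data)

a_n = rng.choice(a_sample, size=100_000)       # plug-in (empirical) distributions
s_n = rng.choice(s_sample, size=100_000)

w = waiting_times(a_n, s_n)
burn = 10_000                                   # discard the transient before estimating under pi
print("estimated E[w]:", w[burn:].mean())
```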
Abstract:
Despite great advances in very large scale integrated-circuit design and manufacturing, performance of even the best available high-speed, high-resolution analog-to-digital converter (ADC) is known to deteriorate while acquiring fast-rising, high-frequency, and nonrepetitive waveforms. Waveform digitizers (ADCs) used in high-voltage impulse recordings and measurements are invariably subjected to such waveforms. Errors resulting from a lowered ADC performance can be unacceptably high, especially when higher accuracies have to be achieved (e.g., when part of a reference measuring system). Static and dynamic nonlinearities (estimated independently) are vital indices for evaluating the performance and suitability of ADCs to be used in such environments. Typically, the estimation of static nonlinearity involves 10-12 h of time or more (for a 12-b ADC), and dynamic characterization requires the acquisition of millions of samples at high input frequencies. ADCs with even higher resolution and faster sampling speeds will soon become available, so there is a need to reduce the testing time for evaluating these parameters. This paper proposes a novel and time-efficient method for the simultaneous estimation of static and dynamic nonlinearity from a single test. This is achieved by conceiving a test signal comprising a high-frequency sinusoid (which addresses the dynamic assessment) modulated by a low-frequency ramp (relevant to the static part). Details of implementation and results on two digitizers are presented and compared with nonlinearities determined by the existing standardized approaches. The good agreement in results and the achievable time savings indicate its suitability.
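One plausible reading of the composite test signal is sketched below as a slow full-scale ramp with a high-frequency sinusoid superimposed; the exact composition, amplitudes, and frequencies used in the paper may differ, so treat the values here as placeholders.

```python
import numpy as np

def composite_test_signal(fs, duration, f_sine, ramp_span, sine_amp):
    """One plausible form of the composite stimulus: a slow full-scale ramp
    (exercising every code for the static part) with a high-frequency sinusoid
    superimposed (exercising dynamic behaviour). All values are placeholders,
    not the paper's settings."""
    t = np.arange(0, duration, 1.0 / fs)
    ramp = ramp_span * (t / duration - 0.5)          # -span/2 ... +span/2 over the record
    sine = sine_amp * np.sin(2 * np.pi * f_sine * t)
    return t, ramp + sine

t, x = composite_test_signal(fs=10e6, duration=0.02, f_sine=1e6,
                             ramp_span=2.0, sine_amp=0.05)
```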
Abstract:
The LISA Parameter Estimation Taskforce was formed in September 2007 to provide the LISA Project with vetted codes, source distribution models and results related to parameter estimation. The Taskforce's goal is to be able to quickly calculate the impact of any mission design changes on LISA's science capabilities, based on reasonable estimates of the distribution of astrophysical sources in the universe. This paper describes our Taskforce's work on massive black-hole binaries (MBHBs). Given present uncertainties in the formation history of MBHBs, we adopt four different population models, based on (i) whether the initial black-hole seeds are small or large and (ii) whether accretion is efficient or inefficient at spinning up the holes. We compare four largely independent codes for calculating LISA's parameter-estimation capabilities. All codes are based on the Fisher-matrix approximation, but in the past they used somewhat different signal models, source parametrizations and noise curves. We show that once these differences are removed, the four codes give results in extremely close agreement with each other. Using a code that includes both spin precession and higher harmonics in the gravitational-wave signal, we carry out Monte Carlo simulations and determine the number of events that can be detected and accurately localized in our four population models.
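A bare-bones sketch of a Fisher-matrix calculation of the kind these codes perform is given below, using one-sided numerical derivatives of a toy waveform model; the model, the noise weighting, and the parameter values are illustrative stand-ins, not a LISA waveform or noise curve.

```python
import numpy as np

def fisher_matrix(signal, params, noise_weights, eps=1e-6):
    """Fisher-matrix sketch: F_ij = (dh/dtheta_i, dh/dtheta_j), with the inner
    product approximated as a noise-weighted sum over samples. `signal(params)`
    is any waveform model; the real LISA codes use frequency-domain waveforms
    and the instrumental noise PSD."""
    h0 = signal(params)
    derivs = []
    for i in range(len(params)):
        p = params.copy()
        p[i] += eps
        derivs.append((signal(p) - h0) / eps)      # one-sided numerical derivative
    n = len(params)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = np.sum(derivs[i] * derivs[j] * noise_weights)
    cov = np.linalg.inv(F)                          # 1-sigma errors ~ sqrt(diag(cov))
    return F, cov

# toy chirp-like model (purely illustrative, not a LISA waveform)
t = np.linspace(0, 1, 4000)
model = lambda p: p[0] * np.sin(2 * np.pi * (p[1] * t + 0.5 * p[2] * t**2))
F, cov = fisher_matrix(model, np.array([1.0, 30.0, 10.0]), np.ones_like(t))
print(np.sqrt(np.diag(cov)))
```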
Abstract:
Doppler weather radars with fast scanning rates must estimate spectral moments based on a small number of echo samples. This paper concerns the estimation of mean Doppler velocity in a coherent radar using a short complex time series. Specific results are presented based on 16 samples. A wide range of signal-to-noise ratios are considered, and attention is given to ease of implementation. It is shown that FFT estimators fare poorly in low SNR and/or high spectrum-width situations. Several variants of a vector pulse-pair processor are postulated and an algorithm is developed for the resolution of phase angle ambiguity. This processor is found to be better than conventional processors at very low SNR values. A feasible approximation to the maximum entropy estimator is derived as well as a technique utilizing the maximization of the periodogram. It is found that a vector pulse-pair processor operating with four lags for clear air observation and a single lag (pulse-pair mode) for storm observation may be a good way to estimate Doppler velocities over the entire gamut of weather phenomena.
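The conventional pulse-pair estimator discussed above can be sketched as follows; the radar parameters and the simulated 16-sample echo are illustrative values, not data from the paper.

```python
import numpy as np

def pulse_pair_velocity(z, prt, wavelength):
    """Conventional pulse-pair estimate of mean Doppler velocity from a short
    complex sample sequence z: v = -(lambda / (4*pi*T)) * arg(R(T)), where
    R(T) is the lag-one autocorrelation estimate."""
    r1 = np.mean(z[1:] * np.conj(z[:-1]))      # lag-one autocorrelation
    return -wavelength / (4 * np.pi * prt) * np.angle(r1)

# 16-sample example: simulated echo with a 12 m/s mean velocity (below the
# Nyquist velocity lambda/(4*T) = 25 m/s for these illustrative parameters)
rng = np.random.default_rng(3)
prt, wavelength, v_true = 1e-3, 0.10, 12.0
t = np.arange(16) * prt
phase = 4 * np.pi * v_true * t / wavelength     # phase convention matching the estimator sign
z = np.exp(-1j * phase) + 0.1 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
print(pulse_pair_velocity(z, prt, wavelength))
```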
Abstract:
The emission from neutral hydrogen (HI) clouds in the post-reionization era (z <= 6), too faint to be detected individually, is present as a diffuse background in all low-frequency radio observations below 1420 MHz. The angular and frequency fluctuations of this radiation (~ 1 mK) are an important future probe of the large-scale structures in the Universe. We show that such observations are a very effective probe of the background cosmological model and the perturbed Universe. In our study we focus on the possibility of determining the redshift-space distortion parameter beta, the coordinate distance r(nu), and its derivative with respect to redshift, r'(nu). Using reasonable estimates for the observational uncertainties and configurations representative of ongoing and upcoming radio interferometers, we predict parameter estimation at a precision comparable with supernova Ia observations and galaxy redshift surveys, across a wide range in redshift that is only partially accessed by other probes. Future HI observations of the post-reionization era thus present a new technique, complementing several existing ones, to probe the expansion history and to elucidate the nature of dark energy.
Abstract:
Forested areas play a dominant role in the global hydrological cycle. Evapotranspiration is a dominant component of the water balance, often nearly matching the rainfall. Though sophisticated methods are available for its estimation, a simple and reliable tool is needed for good water budgeting. Studies have established that evapotranspiration in forested areas is much higher than in agricultural areas. Latitude, forest type, climate and geological characteristics also add to the complexity of its estimation. Few studies have compared different methods of estimating evapotranspiration on forested watersheds in semi-arid tropical forests. In this paper, a comparative study of different methods of estimating evapotranspiration is made with reference to actual measurements from an all-parameter climatological station on a small deciduous forested watershed at Mulehole (area of 4.5 km2), South India. Potential evapotranspiration (ETo) was calculated using ten physically based and empirical methods. Actual evapotranspiration (AET) was calculated through a water balance computed with the SWAT model. The Penman-Monteith method has been used as the benchmark against which the estimates from the various methods are compared. The calculated AET shows good agreement with the worldwide evapotranspiration curve for forests. Error estimates have been made with respect to the Penman-Monteith method. This study gives an idea of the errors involved when methods with limited data are used, and also shows the use of indirect methods in estimating evapotranspiration, which are more suitable for regional-scale studies.
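As an example of the kind of limited-data empirical method such comparisons typically include, the sketch below implements the Hargreaves-Samani temperature-based ETo formula and a percentage error with respect to a Penman-Monteith benchmark; the input values are hypothetical, and the method shown is not necessarily one of the ten used in the paper.

```python
import math

def hargreaves_eto(t_min, t_max, ra_mm_day):
    """Hargreaves-Samani reference evapotranspiration (mm/day), a simple
    temperature-based method. ra_mm_day is extraterrestrial radiation
    expressed as equivalent evaporation (mm/day)."""
    t_mean = (t_min + t_max) / 2.0
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def relative_error(estimate, benchmark):
    """Percentage error of a method relative to the Penman-Monteith benchmark."""
    return 100.0 * (estimate - benchmark) / benchmark

# hypothetical daily values for a tropical dry-forest site
eto_hs = hargreaves_eto(t_min=19.0, t_max=33.0, ra_mm_day=15.2)
print(eto_hs, relative_error(eto_hs, benchmark=5.1))
```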