56 results for Allele frequency data


Relevance: 30.00%

Abstract:

The recent focus of flood frequency analysis (FFA) studies has been on developing methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data-driven, flexible, and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and reliance on the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective, compared with the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t), in estimating the joint distribution of peak flow characteristics and in extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
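The D-kernel itself is not reproduced here, but the boundary-leakage problem the abstract mentions is easy to demonstrate. The sketch below (Python with NumPy; all names are hypothetical) contrasts a plain Gaussian KDE with a simple reflection correction on exponential data, whose support has a hard boundary at zero:

```python
import numpy as np

def gaussian_kde(x, data, h):
    # plain Gaussian kernel density estimate evaluated on the grid x
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def reflected_kde(x, data, h):
    # reflect the sample about x = 0 so no probability mass leaks below the boundary
    return gaussian_kde(x, data, h) + gaussian_kde(-x, data, h)

rng = np.random.default_rng(0)
data = rng.exponential(1.0, size=2000)   # support is [0, inf): a hard boundary at 0
x = np.linspace(0.0, 5.0, 200)
plain = gaussian_kde(x, data, h=0.3)
refl = reflected_kde(x, data, h=0.3)
```

The plain estimate spreads mass below zero and so underestimates the density near the boundary; reflection restores it. The D-kernel of the abstract instead adapts the kernel via a diffusion process, which handles the boundary without an explicit reflection step.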

Relevance: 30.00%

Abstract:

Regionalization approaches are widely used in water resources engineering to identify hydrologically homogeneous groups of watersheds, referred to as regions. Pooled information from sites (depicting watersheds) in a region forms the basis for estimating quantiles of hydrological extreme events at ungauged or sparsely gauged sites in the region. Conventional regionalization approaches can be effective when watersheds (data points) corresponding to different regions can be separated by straight lines or linear planes in the space of watershed-related attributes. In this paper, a kernel-based Fuzzy c-means (KFCM) clustering approach is presented for situations where such linear separation of regions cannot be accomplished. The approach uses kernel functions to map the data points from the attribute space to a higher-dimensional space where they can be separated into regions by linear planes. A procedure to determine the optimal number of regions with the KFCM approach is suggested. Further, formulations to estimate flood quantiles at ungauged sites with the approach are developed. Effectiveness of the approach is demonstrated through Monte Carlo simulation experiments and a case study on watersheds in the United States. Comparison of results with those based on conventional Fuzzy c-means clustering, the region-of-influence approach, and a prior study indicates that the KFCM approach outperforms the other approaches in forming regions that are closer to being statistically homogeneous and in estimating flood quantiles at ungauged sites.

Key points:
- A kernel-based regionalization approach is presented for flood frequency analysis
- A kernel procedure to estimate flood quantiles at ungauged sites is developed
- A set of fuzzy regions is delineated in Ohio, USA
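A rough sketch of the kernel trick the abstract relies on: with a Gaussian kernel satisfying K(x, x) = 1, the feature-space distance between a point and a prototype reduces to 2(1 - K(x, v)), so a fuzzy c-means update can run without ever forming the high-dimensional mapping. The toy implementation below (Python/NumPy; a simplified KFCM variant with prototypes kept in input space, not the authors' formulation) clusters two synthetic groups of "watersheds" described by two attributes:

```python
import numpy as np

def rbf(a, b, sigma):
    # Gaussian kernel; since K(x, x) = 1, the implied feature-space
    # distance is d^2 = 2 * (1 - K(x, v))
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * sigma ** 2))

def kfcm(X, c, sigma=1.0, m=2.0, iters=50):
    # simple deterministic prototype initialization for the sketch
    V = X[:: max(1, len(X) // c)][:c].astype(float)
    for _ in range(iters):
        K = rbf(X[:, None, :], V[None, :, :], sigma)   # (n, c) kernel values
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)        # feature-space distances
        U = (1.0 / d2) ** (1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
        W = (U ** m) * K
        V = (W[:, :, None] * X[:, None, :]).sum(axis=0) / W.sum(axis=0)[:, None]
    return U, V

# two synthetic attribute clusters standing in for homogeneous regions
rng = np.random.default_rng(1)
A = rng.normal([0.0, 0.0], 0.2, size=(50, 2))
B = rng.normal([3.0, 3.0], 0.2, size=(50, 2))
X = np.vstack([A, B])
U, V = kfcm(X, c=2)
labels = U.argmax(axis=1)
```

The memberships U are fuzzy, so a watershed can contribute to several regions, which is exactly what the abstract's "fuzzy regions" refer to.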

Relevance: 30.00%

Abstract:

We address the problem of reconstructing a sparse signal from its DFT magnitude. We refer to this problem as the sparse phase retrieval (SPR) problem, which finds applications in tomography, digital holography, electron microscopy, etc. We develop a Fienup-type iterative algorithm, referred to as the Max-K algorithm, to enforce sparsity and successively refine the estimate of phase. We show that the Max-K algorithm possesses Cauchy convergence properties under certain conditions, that is, the MSE of reconstruction does not increase with iterations. We also formulate the problem of SPR as a feasibility problem, where the goal is to find a signal that is sparse in a known basis and whose Fourier transform magnitude is consistent with the measurement. Subsequently, we interpret the Max-K algorithm as alternating projections onto the object-domain and measurement-domain constraint sets and generalize it to a parameterized relaxation, known as the relaxed averaged alternating reflections (RAAR) algorithm. On the application front, we work with measurements acquired using a frequency-domain optical-coherence tomography (FDOCT) experimental setup. Experimental results on measured data show that the proposed algorithms exhibit good reconstruction performance compared with the direct inversion technique, homomorphic technique, and the classical Fienup algorithm without sparsity constraint; specifically, the autocorrelation artifacts and background noise are suppressed to a significant extent. We also demonstrate that the RAAR algorithm offers a broader framework for FDOCT reconstruction, of which the direct inversion technique and the proposed Max-K algorithm become special instances corresponding to specific values of the relaxation parameter.
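The Max-K iteration described above alternates between the measurement-domain constraint (impose the measured DFT magnitudes) and the object-domain constraint (keep only the K largest entries). A minimal sketch of that alternation (not the authors' code) on a toy 3-sparse signal:

```python
import numpy as np

def max_k(mag, k, iters=300, seed=0):
    # Fienup-type alternating projections with a hard K-sparsity ("Max-K") step
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(mag))
    resids = []
    for _ in range(iters):
        X = np.fft.fft(x)
        resids.append(np.linalg.norm(np.abs(X) - mag))
        X = mag * np.exp(1j * np.angle(X))     # impose the measured magnitudes
        y = np.real(np.fft.ifft(X))
        x = np.zeros(len(mag))
        idx = np.argsort(np.abs(y))[-k:]       # Max-K: keep the K largest entries
        x[idx] = y[idx]
    return x, resids

# toy example: a 3-sparse signal observed only through its DFT magnitude
true = np.zeros(64)
true[[5, 20, 41]] = [1.0, -0.7, 0.5]
mag = np.abs(np.fft.fft(true))
rec, resids = max_k(mag, k=3)
```

Note that phase retrieval is ambiguous up to circular shifts and flips, so success is judged by how well the reconstruction's Fourier magnitude matches the measurement, not by sample-wise agreement with the original signal.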

Relevance: 30.00%

Abstract:

The standard approach to signal reconstruction in frequency-domain optical-coherence tomography (FDOCT) is to apply the inverse Fourier transform to the measurements. This technique offers limited resolution (due to Heisenberg's uncertainty principle). We propose a new super-resolution reconstruction method based on a parametric representation. We consider multilayer specimens, wherein each layer has a constant refractive index, and show that the backscattered signal from such a specimen fits accurately into the framework of the finite-rate-of-innovation (FRI) signal model and is represented by a finite number of free parameters. We deploy the high-resolution Prony method and show that high-quality, super-resolved reconstruction is possible with fewer measurements (about one-fourth of the number required for the standard Fourier technique). To further improve robustness to noise in practical scenarios, we take advantage of an iterated singular-value decomposition algorithm (Cadzow denoiser). We present results of Monte Carlo analyses and assess the statistical efficiency of the reconstruction techniques by comparing their performance against the Cramér-Rao bound. Reconstruction results on experimental data obtained from technical as well as biological specimens show a distinct improvement in resolution and signal-to-reconstruction-noise ratio offered by the proposed method in comparison with the standard approach.
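The FRI view reduces reconstruction to estimating a few frequencies, one per layer interface. A bare-bones Prony estimator (assuming a noiseless sum of complex exponentials; the frequencies below are hypothetical stand-ins for layer depths) illustrates the parametric step:

```python
import numpy as np

def prony(y, k):
    # classic Prony's method: fit y[n] as a sum of k complex exponentials
    # via linear prediction y[m] = -sum_{j=1..k} a[j] * y[m-j]
    n = len(y)
    A = np.column_stack([y[k - j - 1:n - j - 1] for j in range(k)])
    b = -y[k:]
    a = np.linalg.lstsq(A, b, rcond=None)[0]
    # roots of the prediction polynomial are the exponentials themselves
    return np.roots(np.concatenate(([1.0], a)))

# toy FDOCT-like signal: two "layers" produce two frequencies
n = np.arange(64)
f1, f2 = 0.11, 0.23      # hypothetical normalized frequencies
y = np.exp(2j * np.pi * f1 * n) + 0.6 * np.exp(2j * np.pi * f2 * n)
roots = prony(y, 2)
est = np.sort(np.angle(roots) / (2 * np.pi))
```

With noise, the Hankel structure degrades; that is where the Cadzow (iterated SVD) denoiser mentioned in the abstract enters, restoring the low-rank structure before Prony's method is applied.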

Relevance: 30.00%

Abstract:

The temperature (300-973 K) and frequency (100 Hz-10 MHz) response of the dielectric and impedance characteristics of 2BaO-0.5Na₂O-2.5Nb₂O₅-4.5B₂O₃ glasses and glass nanocrystal composites was studied. The dielectric constant of the glass was found to be almost independent of frequency (100 Hz-10 MHz) and temperature (300-600 K). The temperature coefficient of the dielectric constant was 8 ± 3 ppm/K in the 300-600 K range. The relaxation and conduction phenomena were rationalized using the modulus formalism and the universal AC conductivity power law, respectively. The observed relaxation behavior was found to be thermally activated. The complex impedance data were fitted using the least-squares method. Dispersion of the barium sodium niobate (BNN) phase at the nanoscale in a glass matrix resulted in the formation of space charge around the crystal-glass interface, leading to a high effective dielectric constant, especially for samples heat-treated at higher temperatures. The fabricated glass nanocrystal composites exhibited P versus E hysteresis loops at room temperature, and the remnant polarization (Pᵣ) increased with crystallite size.
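The "universal AC conductivity power law" referred to above is Jonscher's σ(ω) = σ_dc + Aω^s. With hypothetical parameter values, the exponent s can be read off as a log-log slope, which is essentially how such fits are checked:

```python
import numpy as np

# Jonscher's universal power law: sigma(w) = sigma_dc + A * w**s
w = np.logspace(2, 7, 60)                # 100 Hz to 10 MHz
sigma_dc, A, s = 1e-6, 1e-10, 0.7        # hypothetical material parameters
sigma = sigma_dc + A * w ** s

# the exponent s is the log-log slope of the dispersive part
slope, intercept = np.polyfit(np.log(w), np.log(sigma - sigma_dc), 1)
```

In practice σ_dc is itself fitted rather than known, and the frequency exponent s (between 0 and 1) is what characterizes the hopping conduction mechanism.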

Relevance: 30.00%

Abstract:

A logic-gate-based digital frequency multiplication technique for low-power frequency synthesis is presented. The proposed digital edge-combining approach offers broadband operation with low power and low area, and is a promising candidate for low-power frequency synthesis in deep-submicrometer CMOS technologies. A chip prototype of the proposed frequency-multiplication-based 2.4-GHz binary frequency-shift keying (BFSK)/amplitude-shift keying (ASK) transmitter (TX) was fabricated in 0.13-µm CMOS technology. The TX achieves maximum data rates of 3 and 20 Mb/s for BFSK and ASK modulation, respectively, consuming 14 mA from a 1.3-V supply. The corresponding energy efficiencies of the TX are 3.6 nJ/bit for BFSK and 0.91 nJ/bit for ASK.
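The edge-combining idea can be illustrated in a few lines: XOR-ing a clock with a delayed replica of itself converts every input edge into an output edge, doubling the frequency. A discrete-time sketch (Python, a behavioral illustration rather than a circuit model):

```python
import numpy as np

def square(n, period):
    # 50% duty-cycle clock of the given period (in samples)
    return ((np.arange(n) % period) < period // 2).astype(int)

def edge_combine_x2(clk, period):
    # XOR the clock with a quarter-period delayed replica: every input edge
    # produces an output edge, so the fundamental frequency doubles
    return clk ^ np.roll(clk, period // 4)

def dominant_bin(xb):
    # index of the strongest non-DC spectral line
    return int(np.argmax(np.abs(np.fft.rfft(xb - xb.mean()))))

period = 16
clk = square(256, period)
out = edge_combine_x2(clk, period)
```

Cascading such stages, or combining more delayed phases, yields higher multiplication factors; the delay accuracy sets the output duty-cycle and spur performance, which is why the abstract emphasizes deep-submicrometer implementation.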

Relevance: 30.00%

Abstract:

Event-triggered sampling (ETS) is a new approach toward efficient signal analysis. The goal of ETS need not be only signal reconstruction, but also direct estimation of desired information in the signal by skillful design of the event. We show the promise of the ETS approach for better analysis of oscillatory non-stationary signals modeled by a time-varying sinusoid, compared with existing uniform Nyquist-rate-sampling-based signal processing. We examine samples drawn using ETS, with zero crossings (ZC), level crossings (LC), and extrema as events, under additive in-band noise and jitter in the detection instant. We find that extrema samples are robust, and that they facilitate instantaneous amplitude (IA) and instantaneous frequency (IF) estimation in a time-varying sinusoid. The estimation is proposed solely using extrema samples and a local-polynomial-regression-based least-squares fitting approach. The proposed approach shows improvement, for noisy signals, over the widely used analytic signal, energy separation, and ZC based approaches (which rely on uniform Nyquist-rate data acquisition and processing). Further, extrema-based ETS in general gives a sub-sampled representation (relative to the Nyquist rate) of a time-varying sinusoid. For the same data-set size, extrema-based ETS gives much better IA and IF estimation than uniform sampling.
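A minimal illustration of extrema-triggered sampling on a stationary sinusoid (Python/NumPy; the paper's local polynomial regression is omitted and simple half-period spacing is used instead): the extrema are the events, |x| at the extrema estimates IA, and the spacing between consecutive extrema estimates IF:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
a0, f0 = 2.0, 10.0
x = a0 * np.cos(2 * np.pi * f0 * t)      # stationary test sinusoid

# event-triggered sampling with extrema as the events:
# an extremum is where the discrete derivative changes sign
d = np.diff(x)
ext = np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1

# IA from |x| at the extrema; IF from the half-period spacing between them
ia_est = np.mean(np.abs(x[ext]))
if_est = np.mean(1.0 / (2.0 * np.diff(t[ext])))
```

For a 10 Hz tone there are about 20 extrema per second, far below the 1 kHz uniform rate, which is the sub-sampling advantage the abstract points to; for time-varying signals the same quantities are tracked locally instead of averaged.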

Relevance: 30.00%

Abstract:

For a multilayered specimen, the backscattered signal in frequency-domain optical-coherence tomography (FDOCT) is expressible as a sum of cosines, each corresponding to a change of refractive index in the specimen. Each of the cosines represents a peak in the reconstructed tomogram. We consider a truncated cosine series representation of the signal, with the constraint that the coefficients in the basis expansion be sparse. An ℓ2 (sum of squared errors) data error is considered with an ℓ1 (sum of absolute values) constraint on the coefficients. The optimization problem is solved using Weiszfeld's iteratively reweighted least squares (IRLS) algorithm. On real FDOCT data, improved results are obtained over the standard reconstruction technique, with lower levels of background measurement noise and fewer artifacts owing to the strong ℓ1 penalty. Previous sparse tomogram reconstruction techniques in the literature proposed collecting sparse samples, necessitating a change in the data capture process conventionally used in FDOCT. The IRLS-based method proposed in this paper does not suffer from this drawback.
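The ℓ1-penalized fit can be sketched with a generic IRLS loop in which the ℓ1 term is replaced, at each iteration, by a weighted ℓ2 term (a standard majorization; this simplified sketch does not reproduce the authors' Weiszfeld variant exactly):

```python
import numpy as np

def irls_l1(A, y, lam=0.1, iters=50, eps=1e-8):
    # IRLS for min ||y - A x||_2^2 + lam * ||x||_1:
    # lam*|x_i| is majorized by lam * x_i^2 / (|x_i^(k)| + eps),
    # so each step is a weighted ridge regression
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

# toy "truncated cosine series" model: y is built from 3 of 32 cosines
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)
A = np.cos(np.pi * np.outer(t, np.arange(32)))
x_true = np.zeros(32)
x_true[[3, 7, 15]] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(128)
x_hat = irls_l1(A, y, lam=0.05)
```

Because the penalty acts on the coefficients rather than on the sampling pattern, the measurements themselves remain the conventional densely sampled FDOCT spectrum, which is the practical point the abstract makes.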

Relevance: 30.00%

Abstract:

In the current state of the art, it remains an open problem to detect damage from partial ultrasonic scan data, with measurements at a coarser spatial scale, when the location of the damage is not known. In the present paper, a recently developed finite-element-based model-reduction scheme in the frequency domain, which employs master degrees of freedom covering the surface scan region of interest, is reported in the context of non-contact inspection based on ultrasonic guided waves. The surface scan region of interest is grouped into master and slave degrees of freedom. An element-wise damage factor is derived that represents the damage state over distributed areas or a sharp condition at inter-element boundaries (for a crack). Laser Doppler Vibrometer (LDV) scan data obtained from a plate-type structure with an inaccessible surface line crack are considered, along with the developed reduced-order damage model, to analyze the extent of scan-data dimensional reduction. The proposed technique is useful in problems where non-contact monitoring of complex structural parts is extremely important and, at the same time, LDV scans can be performed on accessible surfaces only.
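The master/slave partition above follows the classic static (Guyan) condensation idea; a stiffness-only sketch on a 4-DOF spring chain (not the paper's frequency-domain formulation) shows the key property that, for loads applied at the master DOFs, the reduced model reproduces the full static response exactly:

```python
import numpy as np

def static_condensation(K, masters):
    # Guyan reduction: condense the slave DOFs out of the stiffness matrix,
    # K_red = K_mm - K_ms * K_ss^{-1} * K_sm
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    Kmm = K[np.ix_(masters, masters)]
    Kms = K[np.ix_(masters, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    Kss = K[np.ix_(slaves, slaves)]
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# 4-DOF spring chain fixed at the left end, unit stiffnesses
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
masters = [0, 3]          # DOFs on the accessible scan surface
Kr = static_condensation(K, masters)

# load applied at a master DOF only: reduced and full solutions must agree
f = np.zeros(4); f[3] = 1.0
u_full = np.linalg.solve(K, f)
u_red = np.linalg.solve(Kr, f[masters])
```

In a dynamic, frequency-domain setting the condensation becomes frequency dependent and only approximate, which is where schemes like the one reported above differ from this static sketch.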

Relevance: 30.00%

Abstract:

Index-flood-based regional frequency analysis (RFA) procedures are used by hydrologists to estimate design quantiles of hydrological extreme events at data-sparse or ungauged locations in river basins. There is a dearth of attempts to establish which among those procedures is better for RFA in the L-moment framework. This paper evaluates the performance of the conventional index flood (CIF), the logarithmic index flood (LIF), and two variants of the population index flood (PIF) procedures in estimating flood quantiles for ungauged locations, by Monte Carlo simulation experiments and a case study on watersheds in Indiana in the U.S. To evaluate the PIF procedure, L-moment formulations are developed for implementing the procedure in situations where the regional frequency distribution (RFD) is the generalized logistic (GLO), generalized Pareto (GPA), generalized normal (GNO), or Pearson type III (PE3), as those formulations were unavailable. Results indicate that one of the variants of the PIF procedure, which utilizes the regional information on the first two L-moments, is more effective than the CIF and LIF procedures. The improvement in quantile estimation with this variant of the PIF procedure, compared with the CIF procedure, is significant when the RFD is generalized extreme value, GLO, GNO, or PE3, and marginal when it is GPA.
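The index-flood idea common to these procedures is that a site quantile factorizes into a site-specific scale (the index flood) times a regional growth factor, with L-moments used for parameter estimation. A sketch of the first two sample L-moments and the CIF scaling (Python/NumPy; the Gumbel parameters and the growth factor below are hypothetical):

```python
import numpy as np

def sample_l_moments(x):
    # first two sample L-moments via probability-weighted moments:
    # b0 = mean, b1 = (1/n) * sum_j ((j-1)/(n-1)) * x_(j), l2 = 2*b1 - b0
    x = np.sort(x)
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
    return b0, 2 * b1 - b0

def cif_quantile(site_mean, growth_factor):
    # conventional index flood: site quantile = index flood * regional growth factor
    return site_mean * growth_factor

# synthetic annual-maxima record (Gumbel with hypothetical parameters)
rng = np.random.default_rng(0)
x = rng.gumbel(loc=100.0, scale=20.0, size=5000)
l1, l2 = sample_l_moments(x)
q100 = cif_quantile(l1, growth_factor=2.5)   # hypothetical regional growth factor
```

For a Gumbel population, λ1 = ξ + 0.5772α and λ2 = α·ln 2, so the estimates above can be sanity-checked against the known parameters; the PIF variants in the abstract pool such L-moment information across the region instead of using at-site values alone.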

Relevance: 30.00%

Abstract:

Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. In addition, these signals have very transient structures related to spiking or the sudden onset of a stimulus, with durations not exceeding tens of milliseconds. Further, brain signals are highly nonstationary because both behavioral state and external stimuli can change on a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal, something not always possible using standard signal processing techniques such as the short-time Fourier transform, multitaper method, wavelet transform, or Hilbert transform. In this review, we describe a multiscale decomposition technique based on an over-complete dictionary, called matching pursuit (MP), and show that it is able to capture both a sharp stimulus-onset transient and a sustained gamma rhythm in the local field potential recorded from the primary visual cortex. We compare the performance of MP with other techniques and discuss its advantages and limitations. Data and codes for generating all time-frequency power spectra are provided.
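Matching pursuit itself is a short greedy loop: correlate the residual with every dictionary atom, subtract the best-matching projection, and repeat. A toy sketch with a small sin/cos dictionary (not the Gabor dictionary typically used for time-frequency analysis of LFP data):

```python
import numpy as np

def matching_pursuit(x, D, n_iter):
    # greedy MP: repeatedly project the residual on the best-matching atom
    residual = x.astype(float)
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual                # correlation with every atom
        k = np.argmax(np.abs(corr))          # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# dictionary of unit-norm sine and cosine atoms
n = 128
t = np.arange(n)
atoms = []
for f in range(1, 17):
    atoms.append(np.cos(2 * np.pi * f * t / n))
    atoms.append(np.sin(2 * np.pi * f * t / n))
D = np.array(atoms).T
D /= np.linalg.norm(D, axis=0)

x = 3.0 * D[:, 4] + 1.5 * D[:, 9]            # a two-atom test signal
coeffs, residual = matching_pursuit(x, D, n_iter=10)
```

With a genuinely over-complete Gabor dictionary, the same loop lets a single decomposition hold both a millisecond-scale onset transient (a narrow atom) and a sustained gamma rhythm (a long atom), which is the capability the review highlights.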