946 results for Thompson sampling
Abstract:
Female mate choice decisions, which influence sexual selection, involve complex interactions between the two sexes and the environment. Theoretical models predict that male movement and spacing in the field should influence female sampling tactics and that, in turn, females should drive the evolution of male movement and spacing so that males can be sampled optimally. Theoretically, simultaneous sampling of males using a best-of-n or comparative Bayes strategy should yield maximum mating benefits to females. We examined the ecological context of female mate sampling based on acoustic signals in the tree cricket Oecanthus henryi to determine whether the conditions for such optimal strategies are met in the field. These strategies require recall of the quality and location of individual males, which in turn requires male positions to be stable within a night. Calling males rarely moved within a night, potentially enabling female sampling strategies that rely on recall. To examine the possibility of simultaneous acoustic sampling of males, we estimated male acoustic active spaces using information on male spacing, call transmission, and the female hearing threshold. Males were spaced far apart, and overlap between active spaces was rare. We then examined female sampling scenarios by studying female spacing relative to male acoustic active spaces. Only 15% of sampled females could hear multiple males, suggesting that simultaneous mate sampling is rare in the field. Moreover, the relatively large distances between calling males imply high search costs, which may favor threshold strategies that do not require memory.
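The best-of-n and threshold strategies contrasted in this abstract can be illustrated with a toy simulation. All numbers here (quality distribution, pool size, threshold) are hypothetical and not taken from the paper; the sketch only shows why best-of-n needs recall while a threshold rule does not:

```python
import random

def best_of_n(qualities, n):
    """Best-of-n: sample n males, recall all of them, mate with the best."""
    return max(random.sample(qualities, n))

def threshold_rule(qualities, threshold):
    """Threshold (memoryless): mate with the first male above threshold."""
    q = None
    for q in random.sample(qualities, len(qualities)):  # random encounter order
        if q >= threshold:
            return q
    return q  # settle for the last male encountered

random.seed(1)
males = [random.gauss(0.0, 1.0) for _ in range(50)]  # hypothetical quality scores

trials = 2000
bon = sum(best_of_n(males, 5) for _ in range(trials)) / trials
thr = sum(threshold_rule(males, 1.0) for _ in range(trials)) / trials
print(f"best-of-5 mean mate quality: {bon:.2f}")
print(f"threshold mean mate quality: {thr:.2f}")
```

Best-of-n requires remembering (and revisiting) previously sampled males, which is only feasible when male positions are stable; the threshold rule trades some mate quality for zero memory and lower search cost.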
Abstract:
Although uncertainties in material properties have been addressed in the design of flexible pavements, most current modeling techniques assume that pavement layers are homogeneous. This paper addresses the influence of the spatial variability of the resilient moduli of pavement layers by evaluating the effect of the variance and correlation length on the pavement responses to loading. A spatially varying log-normal random field was integrated with the finite-difference method through an exponential autocorrelation function. Variation in the correlation length was found to have a marginal effect on the mean values of the critical strains and a noticeable effect on their standard deviation, which decreases with decreasing correlation length. This reduction in variance arises from spatial averaging over the softer and stiffer zones generated by spatial variability. The increase in the mean value of the critical strains with decreasing correlation length, although minor, shows that pavement performance is adversely affected by the presence of spatially varying layers. The study also confirmed that the higher the variability in the pavement layer moduli, introduced through a higher coefficient of variation (COV), the higher the variability in the pavement response. The study concludes that ignoring spatial variability, either by modeling the pavement layers as homogeneous or by assigning very short correlation lengths, can result in underestimation of the critical strains and thus an inaccurate assessment of pavement performance.
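The core ingredient described here, a log-normal random field with exponential autocorrelation, can be sketched in one dimension: with an exponential correlation function, the underlying Gaussian field is exactly an AR(1) sequence with lag-one correlation rho = exp(-dx / corr_len). This is a minimal 1-D illustration, not the paper's 2-D finite-difference model, and all parameter values are hypothetical:

```python
import math
import random

def lognormal_field(n, dx, corr_len, mean_e, cov):
    """1-D log-normal random field with exponential autocorrelation.

    The underlying standard-normal field is generated as an AR(1)
    sequence with rho = exp(-dx / corr_len), which yields an exponential
    autocorrelation function exactly.
    """
    # Log-normal parameters matching the target mean and COV.
    sigma_ln = math.sqrt(math.log(1.0 + cov ** 2))
    mu_ln = math.log(mean_e) - 0.5 * sigma_ln ** 2
    rho = math.exp(-dx / corr_len)
    g = [random.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        g.append(rho * g[-1] + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0))
    return [math.exp(mu_ln + sigma_ln * x) for x in g]

random.seed(0)
# Hypothetical values: 100 MPa mean resilient modulus, COV = 0.3,
# 2 m correlation length, 0.25 m grid spacing.
field = lognormal_field(n=200, dx=0.25, corr_len=2.0, mean_e=100.0, cov=0.3)
print(min(field), max(field))
```

Shrinking `corr_len` toward `dx` decorrelates adjacent cells, so spatial averages over any fixed region have smaller variance, which is the spatial-averaging effect the abstract describes.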
Abstract:
Advances in forest carbon mapping have the potential to greatly reduce uncertainties in the global carbon budget and to facilitate effective emissions mitigation strategies such as REDD+ (Reducing Emissions from Deforestation and Forest Degradation). Though broad-scale mapping is based primarily on remote sensing data, the accuracy of resulting forest carbon stock estimates depends critically on the quality of field measurements and calibration procedures. The mismatch in spatial scales between field inventory plots and the larger pixels of current and planned remote sensing products for forest biomass mapping is of particular concern, as it has the potential to introduce errors, especially if forest biomass shows strong local spatial variation. Here, we used 30 large (8-50 ha) globally distributed permanent forest plots to quantify the spatial variability in aboveground biomass density (AGBD, in Mg ha^-1) at spatial scales ranging from 5 to 250 m (0.025-6.25 ha), and to evaluate the implications of this variability for calibrating remote sensing products using simulated remote sensing footprints. We found that local spatial variability in AGBD is large for standard plot sizes, averaging 46.3% for replicate 0.1 ha subplots within a single large plot, and 16.6% for 1 ha subplots. AGBD showed weak spatial autocorrelation at distances of 20-400 m, with autocorrelation higher in sites with higher topographic variability and statistically significant in half of the sites. We further show that when field calibration plots are smaller than the remote sensing pixels, the high local spatial variability in AGBD leads to a substantial "dilution" bias in calibration parameters, a bias that cannot be removed with standard statistical methods. Our results suggest that topography should be explicitly accounted for in future sampling strategies and that much care must be taken in designing calibration schemes if remote sensing of forest carbon is to achieve its promise.
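The "dilution" bias described here is an errors-in-variables (attenuation) effect: when a small plot stands in for the whole pixel, local variability acts like noise on the predictor and pulls the fitted calibration slope toward zero. A toy sketch with made-up numbers (300 Mg/ha mean, hypothetical variances; not the paper's data) illustrates the mechanism:

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

random.seed(2)
n = 500
pixel_agbd = [random.gauss(300.0, 60.0) for _ in range(n)]  # true pixel-mean biomass
signal = list(pixel_agbd)  # idealized noise-free sensor response (slope 1 by design)
# A small field plot inside each pixel sees the pixel mean plus large local variability.
plot_agbd = [a + random.gauss(0.0, 90.0) for a in pixel_agbd]

s_true = ols_slope(pixel_agbd, signal)  # calibration against the true pixel mean
s_plot = ols_slope(plot_agbd, signal)   # calibration against the small plot: attenuated
print(s_true, s_plot)
```

The attenuation factor is roughly var(pixel) / (var(pixel) + var(local)), so the bias grows with local spatial variability and shrinks as plot size approaches pixel size, which is why averaging over more data does not remove it.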
Abstract:
Structural information over the entire course of binding interactions, based on analyses of energy landscapes, is described; this provides a framework for understanding the events involved in biomolecular recognition. The conformational dynamics of malectin's exquisite selectivity for diglucosylated N-glycan (Dig-N-glycan), a highly flexible oligosaccharide comprising numerous dihedral torsion angles, are described as an example. A novel approach based on hierarchical sampling is proposed for acquiring metastable molecular conformations constituting low-energy minima, in order to understand the structural features involved in this biological recognition. To this end, four variants of principal component analysis were employed recursively, in both Cartesian and dihedral-angle space, with free-energy landscapes used to select the most stable conformational substates. Subsequently, the k-means clustering algorithm was applied for geometric separation of the major native state to obtain a final ensemble of metastable conformers. A comparison of malectin complexes was then performed to characterize their conformational properties. Analyses of stereochemical metrics and other concerted binding events revealed surface complementarity, cooperative and bidentate hydrogen bonds, water-mediated hydrogen bonds, and carbohydrate-aromatic interactions, including CH-pi and stacking interactions, involved in this recognition. Additionally, a striking structural transition from loop to beta-strands in the malectin CRD upon specific binding to Dig-N-glycan is observed. The interplay of these binding events in malectin and Dig-N-glycan supports an extended conformational selection model as the underlying binding mechanism.
Abstract:
Remote sensing of physiological parameters could be a cost-effective approach to improving health care, and low-power sensors are essential for remote sensing because these sensors are often energy-constrained. This paper presents a power-optimized photoplethysmographic sensor interface to sense arterial oxygen saturation, a technique to dynamically trade off SNR for power during sensor operation, and a simple algorithm to choose when to acquire samples in photoplethysmography. A prototype of the proposed pulse oximeter built using commercial off-the-shelf (COTS) components is tested on 10 adults. The dynamic adaptation techniques described reduce power consumption considerably compared to our reference implementation, and our approach is competitive with state-of-the-art implementations. The techniques presented in this paper may be applied to low-power sensor interface designs where acquiring samples is expensive in terms of power, as epitomized by pulse oximetry.
Abstract:
Event-triggered sampling (ETS) is a new approach towards efficient signal analysis. The goal of ETS need not be only signal reconstruction, but also direct estimation of desired information in the signal through skillful design of the event. We show the promise of the ETS approach for better analysis of oscillatory non-stationary signals modeled by a time-varying sinusoid, when compared to existing uniform Nyquist-rate-sampling-based signal processing. We examine samples drawn using ETS, with zero-crossing (ZC), level-crossing (LC), and extrema events, under additive in-band noise and jitter in the detection instant. We find that extrema samples are robust and also facilitate instantaneous amplitude (IA) and instantaneous frequency (IF) estimation in a time-varying sinusoid. The estimation uses extrema samples alone, together with a local polynomial regression based least-squares fitting approach. The proposed approach shows improvement, for noisy signals, over the widely used analytic signal, energy separation, and ZC based approaches (which rely on uniform Nyquist-rate data acquisition and processing). Further, extrema-based ETS in general gives a sub-sampled representation (relative to the Nyquist rate) of a time-varying sinusoid. For the same data-set size captured with extrema-based ETS and uniform sampling, the former gives much better IA and IF estimation.
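The core idea, keeping only extrema samples and reading IA and IF directly off them, can be sketched on a synthetic chirp. This is a bare-bones illustration (simple half-period IF estimate rather than the paper's local polynomial regression, and hypothetical signal parameters):

```python
import math

# Densely evaluate a time-varying sinusoid (dense grid used only to detect extrema).
fs = 10000.0
f0, f1, amp, T = 50.0, 70.0, 2.0, 1.0  # linear chirp, 50 -> 70 Hz over 1 s
x = [amp * math.sin(2 * math.pi * (f0 + (f1 - f0) * (k / fs) / (2 * T)) * (k / fs))
     for k in range(int(fs * T))]

# Event-triggered sampling at extrema: keep only local maxima/minima,
# i.e. samples where the discrete derivative changes sign.
extrema = [(k / fs, x[k]) for k in range(1, len(x) - 1)
           if (x[k] - x[k - 1]) * (x[k + 1] - x[k]) < 0]

# IA estimate: |signal| at an extremum.
# IF estimate: consecutive extrema are half a period apart -> f ~ 1 / (2 * dt).
ia = [abs(v) for _, v in extrema]
if_est = [1.0 / (2.0 * (t2 - t1))
          for (t1, _), (t2, _) in zip(extrema, extrema[1:])]
print(sum(ia) / len(ia), if_est[0], if_est[-1])
```

For a chirp whose instantaneous frequency rises from 50 to 70 Hz, the first and last IF estimates land near those two values, and the extrema count (about 120 here) is far below the Nyquist-rate sample count, which is the sub-sampling property the abstract highlights.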
Abstract:
FlexRay is a high-speed communication protocol designed for distributed control in automotive applications. Control performance depends not only on the control algorithm but also on the scheduling constraints in communication, so a balance between control performance and communication constraints is required when choosing the sampling rates of the control loops in a node. In this paper, the optimum sampling periods of the control loops that minimize the cost function while satisfying the scheduling constraints are obtained. An algorithm to obtain the delay in service of each task in a node of the control loop within the hyperperiod has also been developed.
Abstract:
Changes in the protonation and deprotonation of amino acid residues in proteins play a key role in many biological processes and pathways. Here, we report calculations of the free-energy profile for the protonation-deprotonation reaction of the 20 canonical alpha amino acids in aqueous solution using ab initio Car-Parrinello molecular dynamics simulations coupled with metadynamics sampling. We show that the calculated change in free energy of the dissociation reaction provides estimates of the multiple pKa values of the amino acids that are in good agreement with experiment. We use the bond-length-dependent number of protons coordinated to the hydroxyl oxygen of the carboxylic and amine groups as the collective variables to explore the free-energy profiles of the Brønsted acid-base chemistry of amino acids in aqueous solution. We ensure that the amino acid undergoing dissociation is solvated by at least three hydration shells, with all water molecules included in the simulations. The method works equally well for amino acids with neutral, acidic, and basic side chains and provides estimates of the multiple pKa values with a mean relative error, with respect to experimental results, of 0.2 pKa units.
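The link between the computed dissociation free energy and the reported pKa values is the standard thermodynamic relation pKa = ΔG / (RT ln 10). A minimal conversion sketch (the glycine carboxyl pKa of about 2.34 is a textbook value used only as a sanity check, not a result from the paper):

```python
import math

R = 8.314462618e-3  # gas constant, kJ mol^-1 K^-1
T = 298.15          # K

def pka_from_dg(dg_kj_per_mol):
    """pKa from the dissociation free energy: pKa = dG / (RT ln 10)."""
    return dg_kj_per_mol / (R * T * math.log(10))

def dg_from_pka(pka):
    """Inverse relation: dG = pKa * RT ln 10."""
    return pka * R * T * math.log(10)

# Sanity check: glycine's carboxyl pKa (~2.34) corresponds to ~13.4 kJ/mol.
print(round(dg_from_pka(2.34), 1))
print(round(pka_from_dg(13.36), 2))
```

At 298 K, RT ln 10 is about 5.71 kJ/mol, so the reported mean error of 0.2 pKa units corresponds to roughly 1 kJ/mol in the free-energy profile.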
Abstract:
We propose data acquisition from continuous-time signals belonging to the class of real-valued trigonometric polynomials using an event-triggered sampling paradigm. The sampling schemes proposed are: level crossing (LC), close-to-extrema LC, and extrema sampling. An analysis of the robustness of these schemes to jitter and additive bandpass Gaussian noise is presented. In general, these sampling schemes result in non-uniformly spaced sample instants. We address the issue of signal reconstruction from the acquired data set by imposing a sparsity structure on the signal model to circumvent the problem of gap and density constraints. The recovery performance is contrasted among the various schemes and with a random sampling scheme. In the proposed approach, both sampling and reconstruction are non-linear operations, and in contrast to the random sampling methodologies proposed in compressive sensing, these techniques may be implemented in practice with low-power circuitry.
Abstract:
Standard approaches for ellipse fitting are based on the minimization of the algebraic or geometric distance between the given data and a template ellipse. When the data are noisy and come from a partial ellipse, the state-of-the-art methods tend to produce biased ellipses. We rely on the sampling structure of the underlying signal and show that the x- and y-coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals, and that their parameters are estimable from partial data. We consider both uniform and nonuniform sampling scenarios in the presence of noise and show that the data can be modeled as a sum of random amplitude-modulated complex exponentials. A low-pass filter is used to suppress noise and approximate the data as a sum of weighted complex exponentials. The annihilating filter used in FRI approaches is applied to estimate the sampling interval in closed form. We perform experiments on simulated and real data, and assess both objective and subjective performance in comparison with the state-of-the-art ellipse fitting methods. The proposed method produces ellipses with less bias. Furthermore, the mean-squared error is lower by about 2 to 10 dB. We show applications of ellipse fitting to iris images, starting from partial edge contours, and to free-hand ellipses drawn on a touch-screen tablet.
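The annihilating-filter idea referenced here can be shown in its simplest form: for a single noiseless complex exponential, the two-tap filter h = [1, -u] with u = exp(j*w0) annihilates the sequence, so u (and hence the frequency) falls out of the ratio of consecutive samples. A minimal sketch with hypothetical parameters, not the paper's full noisy multi-exponential pipeline:

```python
import cmath

# Noiseless single complex exponential: x[n] = c * exp(j * w0 * n).
w0 = 0.7
c = 2.0 + 1.0j
x = [c * cmath.exp(1j * w0 * n) for n in range(8)]

# Annihilating filter for K = 1: h = [1, -u] with u = exp(j * w0), since
# x[n] - u * x[n-1] = 0 for every n.  So u is the ratio of consecutive
# samples, and the frequency is its angle.
u = x[1] / x[0]
w_hat = cmath.phase(u)

# Verify that the filter indeed annihilates the whole sequence.
residual = max(abs(x[n] - u * x[n - 1]) for n in range(1, len(x)))
print(w_hat, residual)
```

For K exponentials the filter has K + 1 taps found by solving a small linear system built from the samples, and with noise the system is solved in a least-squares (e.g. total least-squares) sense; the one-exponential case above is just the closed-form kernel of that machinery.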
Abstract:
Restricted Boltzmann machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods: the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm, which combines these two ideas.
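The quantity being estimated can be made concrete on a toy RBM small enough to enumerate exactly: log p(v) = log sum_h exp(-E(v, h)) - log Z, where Z sums over all joint states. Brute force is feasible only because 2^(nv+nh) is tiny here; for realistic RBMs this is exactly what AIS, CSL, and RAISE approximate. The architecture and weights below are arbitrary illustrative choices:

```python
import itertools
import math
import random

# Tiny binary RBM with energy E(v, h) = -(v.W.h + b.v + c.h).
random.seed(3)
nv, nh = 4, 3
W = [[random.gauss(0, 0.5) for _ in range(nh)] for _ in range(nv)]
b = [random.gauss(0, 0.1) for _ in range(nv)]
c = [random.gauss(0, 0.1) for _ in range(nh)]

def energy(v, h):
    return -(sum(v[i] * W[i][j] * h[j] for i in range(nv) for j in range(nh))
             + sum(b[i] * v[i] for i in range(nv))
             + sum(c[j] * h[j] for j in range(nh)))

def log_z():
    """Brute-force log partition function over all 2^(nv+nh) joint states."""
    return math.log(sum(math.exp(-energy(v, h))
                        for v in itertools.product((0, 1), repeat=nv)
                        for h in itertools.product((0, 1), repeat=nh)))

def log_likelihood(v):
    """Exact log p(v): free energy of v minus log Z."""
    free = math.log(sum(math.exp(-energy(v, h))
                        for h in itertools.product((0, 1), repeat=nh)))
    return free - log_z()

test_v = (1, 0, 1, 0)
print(log_likelihood(test_v))
```

The hidden-unit sum (the free energy) is cheap even for large models; it is the log Z term that blows up exponentially in the number of units, which is why the compared methods differ mainly in how they attack Z.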
Abstract:
The inner ear has been shown to characterize an acoustic stimulus by transducing fluid motion in the inner ear into mechanical bending of stereocilia on the inner hair cells (IHCs). The excitation motion/energy transferred to an IHC depends on the frequency spectrum of the acoustic stimulus and on the spatial location of the IHC along the length of the basilar membrane (BM). Subsequently, the afferent auditory nerve fiber (ANF) bundle samples the encoded waveform in the IHCs by synapsing with them. In this work we focus on the sampling of information by afferent ANFs from the IHCs, and show computationally that sampling at specific time instants is sufficient for decoding the time-varying acoustic spectrum embedded in the acoustic stimulus. The approach is based on sampling the signal at its zero-crossings and higher-order derivative zero-crossings. We show results of the approach on time-varying acoustic spectrum estimation from a cricket call recording. The framework gives a time-domain and non-spatial processing perspective on auditory signal processing. The approach works on the full-band signal and avoids modeling any bandpass filtering that mimics the BM action. Instead, we motivate the approach from the perspective of event-triggered sampling by afferent ANFs of the stimulus encoded in the IHCs. Although the approach yields acoustic spectrum estimates, its plausibility as a biomechanical model is not yet fully established given current insights into mammalian auditory mechanics.