969 results for Dispersive Estimates
Abstract:
We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data-dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose compared with previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) with sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example showing that a sharp bound cannot be universally recovered from empirical data.
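For orientation, a structural result of the kind cited, relating expectations to sample averages, can be illustrated by the standard Rademacher-complexity bound of Bartlett and Mendelson; this generic form is illustrative only and is not the data-dependent algorithmic bound of the paper. For a class $\mathcal{F}$ of $[0,1]$-valued functions, with probability at least $1-\delta$,

\[
\mathbb{E} f \;\le\; \frac{1}{n}\sum_{i=1}^{n} f(X_i) \;+\; 2\,\mathcal{R}_n(\mathcal{F}) \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}} \qquad \text{for all } f \in \mathcal{F},
\]

where $\mathcal{R}_n(\mathcal{F})$ is the Rademacher complexity of the class.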
Abstract:
We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.
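As a minimal sketch (the plain Ritz approximant, not the paper's harmonic Ritz variant; all function names below are illustrative): the standard Krylov approach projects A onto an m-dimensional Krylov subspace via Arnoldi and evaluates φ on the small Hessenberg matrix, where φ(tH_m)e_1 can be read off an augmented matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def phi_krylov(A, b, t=1.0, m=30):
    """Approximate phi(t*A) @ b, phi(z) = (exp(z) - 1)/z,
    from an m-dimensional Krylov subspace (Arnoldi sketch)."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):                       # Arnoldi iteration
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # phi(t*H_m) e_1 via the augmented-matrix identity:
    # expm([[tH, e1], [0, 0]]) carries phi(tH) e1 in its last column.
    aug = np.zeros((m + 1, m + 1))
    aug[:m, :m] = t * H[:m, :m]
    aug[0, m] = 1.0
    phi_e1 = expm(aug)[:m, m]
    return beta * (V[:, :m] @ phi_e1)

# Usage sketch: tridiagonal diffusion-type Jacobian (illustrative)
rng = np.random.default_rng(0)
n = 200
A = -2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
b = rng.standard_normal(n)
print(phi_krylov(A, b, t=0.1)[:3])
```

The harmonic Ritz approximant studied in the paper changes how eigenvalue information of the small matrix is used; the sketch above shows only the baseline construction.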
Abstract:
Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real time series, assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a nonintegral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis-testing framework.
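A minimal sketch of a segment-averaged direct bispectrum estimator with an optional symmetric time-domain window (the normalization and windowing choices here are illustrative conventions, not the paper's):

```python
import numpy as np

def bispectrum_estimate(x, nseg=64, window=np.hanning):
    """Segment-averaged direct bispectrum estimate
    B(k, l) ~ <X[k] X[l] conj(X[k + l])> (illustrative sketch)."""
    seglen = len(x) // nseg
    w = window(seglen)                    # symmetric time-domain window
    nf = seglen // 2
    B = np.zeros((nf, nf), dtype=complex)
    for s in range(nseg):
        seg = x[s * seglen:(s + 1) * seglen]
        X = np.fft.fft(w * (seg - seg.mean()))
        for k in range(nf):
            for l in range(k + 1):        # principal domain l <= k
                B[k, l] += X[k] * X[l] * np.conj(X[k + l])
    return B / nseg

# Usage: one phase-coupled triad (sum phase = 0 + 0.7) with
# nonintegral wavelengths per segment, which produces leakage
t = np.arange(2 ** 14)
f1, f2 = 0.051, 0.113
x = (np.cos(2 * np.pi * f1 * t)
     + np.cos(2 * np.pi * f2 * t + 0.7)
     + 0.5 * np.cos(2 * np.pi * (f1 + f2) * t + 0.7))
B = bispectrum_estimate(x)
```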
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate D-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and D-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the D-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
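The "vertically translated exponential decay curve" can be written, in one plausible parameterization (the symbols here are ours, not the paper's), as

\[
r(t) \;=\; k \;+\; c\,e^{-\lambda t},
\]

where $r(t)$ is the rate estimated from a calibration of age $t$, $k$ is the asymptotic long-term (phylogenetic) substitution rate, $k + c$ is the short-term rate at $t = 0$, and $\lambda$ controls how quickly rate estimates decay toward the phylogenetic rate; correcting a date estimate then amounts to inverting this curve at the appropriate timescale.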
Abstract:
Long-term changes in the genetic composition of a population occur by the fixation of new mutations, a process known as substitution. The rate at which mutations arise in a population and the rate at which they are fixed are expected to be equal under neutral conditions (Kimura, 1968). Between the appearance of a new mutation and its eventual fate of fixation or loss, there will be a period in which it exists as a transient polymorphism in the population (Kimura and Ohta, 1971). If the majority of mutations are deleterious (and nonlethal), the fixation probabilities of these transient polymorphisms are reduced and the mutation rate will exceed the substitution rate (Kimura, 1983). Consequently, different apparent rates may be observed on different time scales of the molecular evolutionary process (Penny, 2005; Penny and Holmes, 2001). The substitution rate of the mitochondrial protein-coding genes of birds and mammals has been traditionally recognized to be about 0.01 substitutions/site/million years (Myr) (Brown et al., 1979; Ho, 2007; Irwin et al., 1991; Shields and Wilson, 1987), with the noncoding D-loop evolving several times more quickly (e.g., Pesole et al., 1992; Quinn, 1992). Over the past decade, there has been mounting evidence that instantaneous mutation rates substantially exceed substitution rates in a range of organisms (e.g., Denver et al., 2000; Howell et al., 2003; Lambert et al., 2002; Mao et al., 2006; Mumm et al., 1997; Parsons et al., 1997; Santos et al., 2005). The immediate reaction to the first of these findings was that the polymorphisms generated by the elevated mutation rate are short-lived, perhaps extending back only a few hundred years (Gibbons, 1998; Macaulay et al., 1997). That is, purifying selection was thought to remove these polymorphisms very rapidly.
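The expected equality of the two rates under neutrality follows from a one-line argument: in a diploid population of size $N$ with per-site mutation rate $\mu$, $2N\mu$ new mutations enter the population per generation, and each neutral mutation fixes with probability $1/(2N)$, so the substitution rate is

\[
k \;=\; 2N\mu \cdot \frac{1}{2N} \;=\; \mu .
\]

Deleterious mutations have fixation probabilities below $1/(2N)$, which is why purifying selection drives $k$ below $\mu$.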
Abstract:
Sequence data often have competing signals that are detected by network programs or Lento plots. Such data can be formed by generating sequences on more than one tree and combining the results: a mixture model. We report that with such mixture models, the estimates of edge (branch) lengths from maximum likelihood (ML) methods that assume a single tree are biased. Given the observed number of competing signals in real data, such a bias in ML estimates is expected to occur frequently. Because network methods can recover competing signals more accurately, there is a need for ML methods that allow a network. A fundamental problem is that mixture models can have more parameters than can be recovered from the data, so that some mixtures are not, in principle, identifiable. We recommend that network programs be incorporated into best-practice analysis, alongside ML and Bayesian trees.
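Concretely, in a two-tree mixture the probability of each site pattern $x_i$ is a weighted sum over the generating trees, so the log-likelihood of the data is (notation ours, for illustration)

\[
\ell(\theta) \;=\; \sum_{i} \log\!\big( w\,P(x_i \mid T_1, \theta_1) + (1 - w)\,P(x_i \mid T_2, \theta_2) \big),
\]

whereas a single-tree ML method fits one tree and one set of edge lengths to data produced by both processes, which is one way to see how biased edge-length estimates can arise.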
Abstract:
This CD-ROM includes PDFs of presentations on the following topics: "TXDOT Revenue and Expenditure Trends;" "Examine Highway Fund Diversions, & Benchmark Texas Vehicle Registration Fees;" "Evaluation of the JACK Model;" "Future highway construction cost trends;" and "Fuel Efficiency Trends and Revenue Impact."
Abstract:
This paper establishes sufficient conditions to bound the error in perturbed conditional mean estimates derived from a perturbed model (only the scalar case is shown in this paper, but a similar result is expected to hold for the vector case). The results established here extend recent stability results on approximating information-state filter recursions to stability results on the approximate conditional mean estimates. The filter stability results presented provide bounds for a wide variety of model-error situations.
Abstract:
Background: National physical activity (PA) data suggest that there is a considerable difference in the physical activity levels of US and Australian adults. Although different surveys (Active Australia [AA] and BRFSS) are used, the questions are similar. Different protocols, however, are used to estimate “activity” from the data collected. The primary aim of this study was to assess whether the two approaches to the management of PA data could explain some of the difference in prevalence estimates derived from the two national surveys. Methods: Secondary data analysis of the most recent AA survey (N = 2987). Results: 15% of the sample was defined as “active” using the Australian criteria but as “inactive” using the BRFSS protocol, even though their weekly energy expenditure was commensurate with meeting current guidelines. Younger respondents (age < 45 y) were more likely to be “misclassified” under the BRFSS criteria. Conclusions: The prevalence of activity in Australia and the US appears to be more similar than we had previously thought.
Abstract:
During a major flood event, the inundation of urban environments leads to complicated flow motion, most often associated with significant sediment fluxes. In the present study, a series of field measurements was conducted in an inundated section of the City of Brisbane (Australia) around the peak of a major flood in January 2011. Experiments were performed to use the acoustic Doppler velocimeter (ADV) backscatter amplitude as a surrogate estimate of the suspended sediment concentration (SSC) during the flood event. The flood water deposit samples were predominantly silty material with a median particle size of about 25 μm, and they exhibited non-Newtonian behavior under rheological testing. In the inundated urban environment during the flood, estimates of suspended sediment concentration showed a general trend of increasing SSC with decreasing water depth. The suspended sediment flux data showed substantial sediment flux amplitudes, consistent with the murky appearance of the floodwaters. Altogether, the results highlighted the large suspended sediment loads and fluctuations in the inundated urban setting, possibly associated with non-Newtonian behavior. During the receding flood, some unusual long-period oscillations were observed (periods of about 18 min), although their cause remains unknown. The field deployment was conducted in challenging conditions, highlighting a number of practical issues that arise during a natural disaster.
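Backscatter-to-SSC conversion is commonly done by calibrating a log-linear relation against SSC measured from physical water samples; a minimal sketch of that common approach (not the authors' exact calibration, and with placeholder values) is:

```python
import numpy as np

# Calibration pairs: ADV backscatter amplitude (counts) vs. measured
# SSC (g/L) from water samples -- values below are illustrative only.
amp_cal = np.array([90.0, 110.0, 130.0, 150.0, 170.0])
ssc_cal = np.array([0.2, 0.6, 1.8, 5.5, 16.0])

# Common log-linear model: log10(SSC) = a + b * amplitude
b, a = np.polyfit(amp_cal, np.log10(ssc_cal), 1)

def ssc_from_backscatter(amplitude):
    """Surrogate SSC estimate (g/L) from ADV backscatter amplitude."""
    return 10.0 ** (a + b * np.asarray(amplitude))

print(ssc_from_backscatter([100.0, 140.0, 160.0]))
```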