16 results for instantaneous frequency estimation
in CentAUR: Central Archive University of Reading - UK
Abstract:
Analyses of high-density single-nucleotide polymorphism (SNP) data, such as genetic mapping and linkage disequilibrium (LD) studies, require phase-known haplotypes to allow for the correlation between tightly linked loci. However, current SNP genotyping technology cannot determine phase, which must be inferred statistically. In this paper, we present a new Bayesian Markov chain Monte Carlo (MCMC) algorithm for population haplotype frequency estimation, particularly in the context of LD assessment. The novel feature of the method is the incorporation of a log-linear prior model for population haplotype frequencies. We present simulations to suggest that (1) the log-linear prior model is more appropriate than the standard coalescent process in the presence of recombination (>0.02 cM between adjacent loci), and (2) there is substantial inflation in measures of LD obtained by a "two-stage" approach that treats the "best" haplotype configuration as correct, without regard to uncertainty in the reconstruction. Genet Epidemiol 25:106-114, 2003. (C) 2003 Wiley-Liss, Inc.
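The abstract's core problem is estimating haplotype frequencies when genotypes leave phase ambiguous. The paper's method is a Bayesian MCMC sampler with a log-linear prior; as a much simpler illustration of the same missing-phase structure, here is the classic two-locus EM algorithm, with invented genotype counts. This is a sketch of the general technique, not the authors' algorithm.

```python
# EM estimation of haplotype frequencies for two biallelic SNPs.
# Only the double heterozygote Aa/Bb is phase-ambiguous: it may carry
# haplotypes AB|ab or Ab|aB. Counts below are invented for illustration.
import numpy as np

# Genotype counts (rows: AA, Aa, aa; columns: BB, Bb, bb).
n = np.array([[20,  9,  1],
              [15, 40,  6],
              [ 2,  8, 12]], dtype=float)
N = 2 * n.sum()                     # total number of haplotypes

p = np.full(4, 0.25)                # frequencies of AB, Ab, aB, ab
for _ in range(200):
    pAB, pAb, paB, pab = p
    # E-step: probability the double heterozygote has phase AB|ab.
    w = pAB * pab / (pAB * pab + pAb * paB)
    cAB = 2 * n[0, 0] + n[0, 1] + n[1, 0] + w * n[1, 1]
    cAb = 2 * n[0, 2] + n[0, 1] + n[1, 2] + (1 - w) * n[1, 1]
    caB = 2 * n[2, 0] + n[2, 1] + n[1, 0] + (1 - w) * n[1, 1]
    cab = 2 * n[2, 2] + n[2, 1] + n[1, 2] + w * n[1, 1]
    # M-step: frequencies are the normalised expected haplotype counts.
    p = np.array([cAB, cAb, caB, cab]) / N

print(dict(zip(["AB", "Ab", "aB", "ab"], p.round(4))))
```

A "two-stage" analysis of the kind the abstract criticises would assign each double heterozygote its single most likely phase and then treat those haplotypes as observed, discarding the uncertainty carried by the weight w above.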
Abstract:
This paper introduces Hilbert Analysis (HA), a novel digital signal processing technique, for the investigation of tremor. HA comprises two complementary tools: the Empirical Mode Decomposition (EMD) and the Hilbert Spectrum (HS). In this work we show that the EMD can automatically detect and isolate tremulous and voluntary movements in experimental signals collected from 31 patients with different conditions. Our results also suggest that tremor may be described by a new class of mathematical functions defined in the HA framework. In a further study, the HS was employed to visualise the energy activity of the signals. This tool introduces the concept of instantaneous frequency to the field of tremor and provides, in a time-frequency-energy plot, a clear visualisation of the local activity of tremor energy over time. HA proved very useful for objective measurement of any kind of tremor and can therefore be used for functional assessment.
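A minimal sketch of the Hilbert-spectrum idea in this abstract: estimate instantaneous amplitude and frequency from the phase of the analytic signal. A full HA pipeline would first split the recording into intrinsic mode functions with EMD; here a single synthetic tremor-like component is used so the example stays self-contained, and the sampling rate is an assumption.

```python
# Instantaneous frequency from the analytic signal (Hilbert transform).
import numpy as np
from scipy.signal import hilbert

fs = 200.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
# Synthetic tremor-like oscillation whose frequency drifts from 4 to 6 Hz.
x = np.sin(2 * np.pi * (4 * t + 0.2 * t**2))

analytic = hilbert(x)                        # x + j * HilbertTransform(x)
amplitude = np.abs(analytic)                 # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))        # instantaneous phase (rad)
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)

print(inst_freq[100], inst_freq[-100])       # ~4.2 Hz early, ~5.8 Hz late
```

A time-frequency-energy plot of the kind the abstract describes would scatter t against inst_freq, coloured by amplitude squared.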
Abstract:
Models developed to identify the rates and origins of nutrient export from land to stream require an accurate assessment of the nutrient load present in the water body in order to calibrate model parameters and structure. These data are rarely available at a representative scale and in an appropriate chemical form except in research catchments. Observational errors associated with nutrient load estimates based on these data lead to a high degree of uncertainty in modelling and nutrient budgeting studies. Here, daily paired instantaneous P and flow data for 17 UK research catchments covering a total of 39 water years (WY) have been used to explore the nature and extent of the observational error associated with nutrient flux estimates based on partial fractions and infrequent sampling. The daily records were artificially decimated to create 7 stratified sampling records, 7 weekly records, and 30 monthly records from each WY and catchment. These were used to evaluate the impact of sampling frequency on load estimate uncertainty. The analysis underlines the high uncertainty of load estimates based on monthly data and individual P fractions rather than total P. Catchments with a high baseflow index and/or low population density were found to return a lower RMSE on load estimates when sampled infrequently than those with a low baseflow index and high population density. Catchment size was not shown to be important, though a limitation of this study is that daily records may fail to capture the full range of P export behaviour in smaller catchments with flashy hydrographs, leading to an underestimate of uncertainty in load estimates for such catchments. Further analysis of sub-daily records is needed to investigate this fully. Here, recommendations are given on load estimation methodologies for different catchment types sampled at different frequencies, and the ways in which this analysis can be used to identify observational error and uncertainty for model calibration and nutrient budgeting studies. (c) 2006 Elsevier B.V. All rights reserved.
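A small sketch of the decimation experiment this abstract describes: take a daily concentration-flow record, build load estimates from weekly and monthly subsamples, and compare them with the "true" load from the full daily record. The data are synthetic, and the flow-weighted interpolation estimator below is one common choice, used purely for illustration rather than as the paper's method.

```python
# Decimate a daily P record and measure load-estimate RMSE by sampling frequency.
import numpy as np

rng = np.random.default_rng(42)
days = 365
flow = np.exp(rng.normal(0.0, 0.8, days))                # daily flow (m3/s)
conc = 0.05 + 0.4 * flow / flow.max() + rng.normal(0, 0.01, days)  # P (mg/l)

true_load = np.sum(conc * flow)                          # reference daily load

def interpolated_load(step, offset=0):
    """Flow-weighted load estimate from samples taken every `step` days."""
    idx = np.arange(offset, days, step)
    mean_flux = np.mean(conc[idx] * flow[idx])
    return mean_flux * days * flow.mean() / flow[idx].mean()

for step, label in [(7, "weekly"), (30, "monthly")]:
    # Average over all possible sampling offsets to characterise the error.
    ests = np.array([interpolated_load(step, o) for o in range(step)])
    rmse = np.sqrt(np.mean((ests - true_load) ** 2))
    print(f"{label}: RMSE = {100 * rmse / true_load:.1f}% of true load")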
Abstract:
Background: Dietary assessment methods are important tools for nutrition research. Online dietary assessment tools have the potential to become invaluable methods of assessing dietary intake because, compared with traditional methods, they have many advantages including the automatic storage of input data and the immediate generation of nutritional outputs. Objective: The aim of this study was to develop an online food frequency questionnaire (FFQ) for dietary data collection in the “Food4Me” study and to compare this with the validated European Prospective Investigation of Cancer (EPIC) Norfolk printed FFQ. Methods: The Food4Me FFQ used in this analysis was developed to consist of 157 food items. Standardized color photographs were incorporated in the development of the Food4Me FFQ to facilitate accurate quantification of the portion size of each food item. Participants were recruited in two centers (Dublin, Ireland and Reading, United Kingdom) and each received the online Food4Me FFQ and the printed EPIC-Norfolk FFQ in random order. Participants completed the Food4Me FFQ online and, for most food items, participants were requested to choose their usual serving size among seven possibilities from a range of portion size pictures. The level of agreement between the two methods was evaluated for both nutrient and food group intakes using the Bland and Altman method and classification into quartiles of daily intake. Correlations were calculated for nutrient and food group intakes. Results: A total of 113 participants were recruited with a mean age of 30 (SD 10) years (male: 40.7%, 46/113; female: 59.3%, 67/113). Cross-classification into exact plus adjacent quartiles ranged from 77% to 97% at the nutrient level and 77% to 99% at the food group level. Agreement at the nutrient level was highest for alcohol (97%) and lowest for percent energy from polyunsaturated fatty acids (77%). Crude unadjusted correlations for nutrients ranged between .43 and .86. Agreement at the food group level was highest for “other fruits” (eg, apples, pears, oranges) and lowest for “cakes, pastries, and buns”. For food groups, correlations ranged between .41 and .90. Conclusions: The results demonstrate that the online Food4Me FFQ has good agreement with the validated printed EPIC-Norfolk FFQ for assessing both nutrient and food group intakes, rendering it a useful tool for ranking individuals based on nutrient and food group intakes.
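A minimal sketch of the quartile cross-classification used in this abstract to compare two dietary assessment methods: classify each participant's intake into quartiles under both instruments and report the share falling into the exact or an adjacent quartile. The inputs below are synthetic stand-ins for the FFQ and reference intakes.

```python
# Quartile cross-classification and crude correlation for paired intakes.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.gamma(4.0, 10.0, 113)            # stand-in reference intakes
ffq = reference * rng.normal(1.0, 0.25, 113)     # stand-in online FFQ intakes

def quartile(x):
    """0-based quartile index of each value within its own distribution."""
    return np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x)

q_ref, q_ffq = quartile(reference), quartile(ffq)
exact = np.mean(q_ref == q_ffq)
adjacent = np.mean(np.abs(q_ref - q_ffq) <= 1)   # exact plus adjacent

print(f"exact: {100 * exact:.0f}%, exact + adjacent: {100 * adjacent:.0f}%")
print(f"crude correlation: {np.corrcoef(reference, ffq)[0, 1]:.2f}")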
Abstract:
The equations of Milsom are evaluated, giving the ground range and group delay of radio waves propagated via the horizontally stratified model ionosphere proposed by Bradley and Dudeney. Expressions for the ground range which allow for the effects of the underlying E- and F1-regions are used to evaluate the basic maximum usable frequency or M-factors for single F-layer hops. An algorithm for the rapid calculation of the M-factor at a given range is developed, and shown to be accurate to within 5%. The results reveal that the M(3000)F2-factor scaled from vertical-incidence ionograms using the standard URSI procedure can be up to 7.5% in error. A simple addition to the algorithm effects a correction to ionogram values to make these accurate to 0.5%.
Abstract:
Background: Advances in nutritional assessment are continuing to embrace developments in computer technology. The online Food4Me food frequency questionnaire (FFQ) was created as an electronic system for the collection of nutrient intake data. To ensure its accuracy in assessing both nutrient and food group intake, further validation against data obtained using a reliable, but independent, instrument and assessment of its reproducibility are required. Objective: The aim was to assess the reproducibility and validity of the Food4Me FFQ against a 4-day weighed food record (WFR). Methods: Reproducibility of the Food4Me FFQ was assessed using test-retest methodology by asking participants to complete the FFQ on 2 occasions 4 weeks apart. To assess the validity of the Food4Me FFQ against the 4-day WFR, half the participants were also asked to complete a 4-day WFR 1 week after the first administration of the Food4Me FFQ. Levels of agreement between nutrient and food group intakes estimated by the repeated administrations of the Food4Me FFQ, and between the Food4Me FFQ and the 4-day WFR, were evaluated using Bland-Altman methodology and classification into quartiles of daily intake. Crude unadjusted correlation coefficients were also calculated for nutrient and food group intakes. Results: In total, 100 people participated in the assessment of reproducibility (mean age 32, SD 12 years), and 49 of these (mean age 27, SD 8 years) also took part in the assessment of validity. Crude unadjusted correlations for repeated Food4Me FFQ ranged from .65 (vitamin D) to .90 (alcohol). The mean cross-classification into “exact agreement plus adjacent” was 92% for both nutrient and food group intakes, and Bland-Altman plots showed good agreement for energy-adjusted macronutrient intakes. Agreement between the Food4Me FFQ and 4-day WFR varied, with crude unadjusted correlations ranging from .23 (vitamin D) to .65 (protein, % total energy) for nutrient intakes and .11 (soups, sauces and miscellaneous foods) to .73 (yogurts) for food group intake. The mean cross-classification into “exact agreement plus adjacent” was 80% and 78% for nutrient and food group intake, respectively. There were no significant differences between energy intakes estimated using the Food4Me FFQ and 4-day WFR, and Bland-Altman plots showed good agreement for both energy and energy-controlled nutrient intakes. Conclusions: The results demonstrate that the online Food4Me FFQ is reproducible for assessing nutrient and food group intake and has moderate agreement with the 4-day WFR for assessing energy and energy-adjusted nutrient intakes. The Food4Me FFQ is a suitable online tool for assessing dietary intake in healthy adults.
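Complementing the quartile example above, here is a minimal sketch of the Bland-Altman agreement analysis this abstract relies on: for paired measurements from two methods (hypothetical FFQ and weighed-food-record energy intakes below), compute the mean bias and the 95% limits of agreement, bias ± 1.96 SD of the differences.

```python
# Bland-Altman bias and limits of agreement for two measurement methods.
import numpy as np

rng = np.random.default_rng(7)
wfr = rng.normal(2000, 400, 49)          # stand-in 4-day WFR energy (kcal)
ffq = wfr + rng.normal(50, 250, 49)      # stand-in FFQ estimates, same people

diff = ffq - wfr
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

print(f"bias = {bias:.0f} kcal, 95% limits of agreement = "
      f"({loa[0]:.0f}, {loa[1]:.0f}) kcal")
# A Bland-Altman plot would scatter (ffq + wfr) / 2 against diff, with
# horizontal lines at the bias and at each limit of agreement.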
Abstract:
This article introduces a new general method for genealogical inference that samples independent genealogical histories using importance sampling (IS) and then samples other parameters with Markov chain Monte Carlo (MCMC). It is then possible to more easily utilize the advantages of importance sampling in a fully Bayesian framework. The method is applied to the problem of estimating recent changes in effective population size from temporally spaced gene frequency data. The method gives the posterior distribution of effective population size at the time of the oldest sample and at the time of the most recent sample, assuming a model of exponential growth or decline during the interval. The effect of changes in number of alleles, number of loci, and sample size on the accuracy of the method is described using test simulations, and it is concluded that these have an approximately equivalent effect. The method is used on three example data sets and problems in interpreting the posterior densities are highlighted and discussed.
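A generic sketch of the importance-sampling step this abstract builds on: approximate a posterior expectation by drawing from a proposal and weighting each draw by the ratio of target to proposal density. The toy target below, a posterior over an exponential growth rate r, is invented; the paper's genealogical likelihood is far more involved, and its MCMC layer over the other parameters is omitted.

```python
# Importance sampling for a posterior expectation (toy, one-dimensional).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def unnorm_posterior(r):
    """Invented unnormalised posterior for a growth rate r (assumption)."""
    return np.exp(-0.5 * ((r - 0.8) / 0.3) ** 2) * (r > 0)

proposal = stats.norm(0.5, 1.0)                  # broad proposal over r
r = proposal.rvs(50_000, random_state=rng)
w = unnorm_posterior(r) / proposal.pdf(r)        # importance weights
w /= w.sum()                                     # self-normalise

post_mean = np.sum(w * r)
ess = 1.0 / np.sum(w ** 2)                       # effective sample size
print(f"posterior mean of r ~ {post_mean:.3f}, ESS ~ {ess:.0f}")
```

The effective sample size is the usual diagnostic for whether the independent histories are informative enough before handing the weighted sample to the MCMC stage.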
Abstract:
This study investigates the superposition-based cooperative transmission system. In this system, a key point is for the relay node to detect the data transmitted from the source node. This issue has received little attention in the existing literature, as the channel is usually assumed to be flat fading and a priori known. In practice, however, the channel is not only a priori unknown but also subject to frequency-selective fading. Channel estimation is thus necessary. Of particular interest is channel estimation at the relay node, which imposes extra requirements on the system resources. The authors propose a novel turbo least-squares channel estimator that exploits the superposition structure of the transmitted data. The proposed channel estimator not only requires no pilot symbols but also has significantly better performance than the classic approach. The soft-in-soft-out minimum mean square error (MMSE) equaliser is also re-derived to match the superimposed data structure. Finally, computer simulation results are presented to verify the proposed algorithm.
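A minimal sketch of the least-squares building block behind the estimator described above: recover the taps of a frequency-selective (FIR) channel from symbols the receiver knows. For clarity this sketch uses explicit training symbols, which the paper's turbo scheme specifically avoids by exploiting the superimposed data instead; all sizes and noise levels are invented.

```python
# Least-squares FIR channel estimation from known symbols.
import numpy as np

rng = np.random.default_rng(3)
L, N = 4, 64                                   # channel taps, training length
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
s = rng.choice([-1.0, 1.0], N) + 0j            # known BPSK symbols

# Received signal: convolution with the channel plus complex noise.
r = np.convolve(s, h)[:N] + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Build the N x L convolution matrix S with S[n, k] = s[n - k].
S = np.zeros((N, L), dtype=complex)
for k in range(L):
    S[k:, k] = s[:N - k]

h_hat, *_ = np.linalg.lstsq(S, r, rcond=None)  # LS estimate of the taps
print(np.round(h, 3), np.round(h_hat, 3), sep="\n")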
Abstract:
Little has been reported on the performance of near-far resistant CDMA detectors in the presence of system parameter estimation errors (SPEEs). Starting from the general mathematical model of matched filters, the paper examines the effects of three classes of SPEEs, namely time-delay, carrier-phase, and carrier-frequency errors, on the bit error rate (BER) performance of an emerging type of near-far resistant coherent DS/SSMA detector, the linear decorrelating detector. For comparison, the corresponding results for the conventional detector are also presented. It is shown that the linear decorrelating detector can still maintain a considerable performance advantage over the conventional detector even when some SPEEs exist.
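A minimal sketch of the linear decorrelating detector discussed above, for synchronous CDMA with perfectly known parameters: apply the inverse of the code cross-correlation matrix R to the matched-filter outputs, which removes multiple-access interference regardless of user powers (the near-far problem). The SPEEs the paper studies are not modelled here, and the codes and amplitudes are invented.

```python
# Conventional vs linear decorrelating detection in synchronous CDMA.
import numpy as np

rng = np.random.default_rng(5)
K, Nc = 3, 16                                 # users, chips per symbol
codes = rng.choice([-1.0, 1.0], (K, Nc)) / np.sqrt(Nc)
R = codes @ codes.T                           # code cross-correlation matrix

A = np.diag([1.0, 10.0, 0.3])                 # very unequal powers (near-far)
b = rng.choice([-1.0, 1.0], K)                # transmitted bits

chip_noise = 0.1 * rng.normal(size=Nc)
received = codes.T @ (A @ b) + chip_noise
y = codes @ received                          # matched-filter bank: R A b + noise

b_conv = np.sign(y)                           # conventional detector
b_dec = np.sign(np.linalg.solve(R, y))        # decorrelating detector
print("sent:", b, "conventional:", b_conv, "decorrelator:", b_dec)
```

With the strong interferer present, the conventional detector typically misreads the weak user's bit while the decorrelator does not, which is exactly the near-far resistance the abstract refers to.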
Abstract:
A particle filter is a data assimilation scheme that employs a fully nonlinear, non-Gaussian analysis step. Unfortunately, as the size of the state grows, the number of ensemble members required for the particle filter to converge to the true solution increases exponentially. To overcome this, Vaswani [Vaswani N. 2008. IEEE Trans Signal Process 56:4583–97] proposed a new method known as mode tracking to improve the efficiency of the particle filter. When mode tracking, the state is split into two subspaces: one subspace is forecast using the particle filter, while the values of the other are set equal to the mode of its marginal pdf. There are many ways to split the state. One hypothesis is that the best results should be obtained from the particle filter with mode tracking when the maximum number of unimodal dimensions is mode tracked. The aim of this paper is to test this hypothesis using the three-dimensional stochastic Lorenz equations with direct observations. It is found that mode tracking the maximum number of unimodal dimensions does not always provide the best result. The best choice of states to mode track depends on the number of particles used and on the accuracy and frequency of the observations.
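To make the scheme concrete, here is a minimal bootstrap particle filter on a one-dimensional toy model. The paper works with the three-dimensional stochastic Lorenz equations and additionally mode-tracks part of the state, which this sketch omits; the dynamics and noise levels below are invented.

```python
# Bootstrap particle filter: forecast, weight by likelihood, resample.
import numpy as np

rng = np.random.default_rng(11)
n_particles, n_steps = 500, 50
q, r = 0.5, 1.0                       # model and observation error std

def model(x):
    return 0.9 * x + 2.0 * np.sin(0.1 * x)   # toy nonlinear dynamics

# Synthetic truth and direct observations of it.
truth = np.zeros(n_steps)
for k in range(1, n_steps):
    truth[k] = model(truth[k - 1]) + q * rng.normal()
obs = truth + r * rng.normal(size=n_steps)

particles = rng.normal(0, 2, n_particles)
estimates = []
for k in range(n_steps):
    # Forecast: propagate every particle through the stochastic model.
    particles = model(particles) + q * rng.normal(size=n_particles)
    # Analysis: weight particles by the Gaussian observation likelihood.
    w = np.exp(-0.5 * ((obs[k] - particles) / r) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Resample to avoid weight degeneracy.
    particles = rng.choice(particles, n_particles, p=w)

print("RMSE:", np.sqrt(np.mean((np.array(estimates) - truth) ** 2)))
```

Mode tracking would replace the sampled update in some dimensions with the mode of their conditional pdf, so only the remaining dimensions consume particles; the weight-degeneracy step above is exactly where the exponential cost in state size shows up.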
Abstract:
We present a new method to determine mesospheric electron densities from partially reflected medium frequency radar pulses. The technique uses an optimal estimation inverse method and retrieves both an electron density profile and a gradient electron density profile. As well as accounting for the absorption of the two magnetoionic modes formed by ionospheric birefringence of each radar pulse, the forward model of the retrieval parameterises possible Fresnel scatter of each mode by fine electronic structure, phase changes of each mode due to Faraday rotation and the dependence of the amplitudes of the backscattered modes upon pulse width. Validation results indicate that known profiles can be retrieved and that χ² tests upon retrieval parameters satisfy validity criteria. Application to measurements shows that retrieved electron density profiles are consistent with accepted ideas about seasonal variability of electron densities and their dependence upon nitric oxide production and transport.
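A minimal sketch of the optimal-estimation inverse method named in this abstract: for a forward model y = Kx + noise, combine the measurements with an a priori profile using the standard maximum a posteriori solution. The real retrieval's forward model (magnetoionic absorption, Faraday rotation, Fresnel scatter) is replaced here by a random linear K, and all covariances are invented.

```python
# Optimal estimation (MAP) retrieval for a linear forward model.
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs = 10, 15

x_true = np.linspace(1.0, 3.0, n_state)       # "electron density" profile
K = rng.normal(size=(n_obs, n_state))         # linearised forward model
Se = 0.1 * np.eye(n_obs)                      # measurement error covariance
Sa = 1.0 * np.eye(n_state)                    # a priori covariance
x_a = np.full(n_state, 2.0)                   # a priori profile

y = K @ x_true + rng.multivariate_normal(np.zeros(n_obs), Se)

# MAP solution: x = x_a + (K'Se^-1K + Sa^-1)^-1 K'Se^-1 (y - K x_a)
SeI = np.linalg.inv(Se)
S_hat = np.linalg.inv(K.T @ SeI @ K + np.linalg.inv(Sa))  # retrieval covariance
x_hat = x_a + S_hat @ K.T @ SeI @ (y - K @ x_a)

chi2 = (y - K @ x_hat) @ SeI @ (y - K @ x_hat)   # residual χ² diagnostic
print(np.round(x_hat - x_true, 2), f"chi2 = {chi2:.1f}")
```

The χ² residual printed at the end plays the role of the validity tests mentioned in the abstract: it should be comparable to the number of degrees of freedom if the error covariances are consistent.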
Abstract:
We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when this source of uncertainty is ignored. The results support qualitatively the intuition that stronger effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) occur at higher frequencies; the surplus information brought by REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixpoints, dating errors and the functional form of the age-depth model play a role; the joint distribution of all time points (serial correlation, stratigraphic order) also determines spectrum estimation.
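A minimal sketch of the core of the algorithm described above: a Lomb-Scargle periodogram of an unevenly spaced series, using scipy's implementation on invented data. The segment averaging, bias correction, AR(1) significance levels and timescale-error Monte Carlo loop that make up REDFITmc2 itself are not reproduced.

```python
# Lomb-Scargle periodogram of an unevenly sampled palaeoclimate-like series.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(8)
t = np.sort(rng.uniform(0, 1000, 300))        # uneven sample "ages" (years)
y = np.sin(2 * np.pi * t / 100) + 0.5 * rng.normal(size=t.size)
y -= y.mean()                                  # remove the mean first

periods = np.linspace(20, 500, 2000)
omega = 2 * np.pi / periods                    # angular frequencies
power = lombscargle(t, y, omega)

print(f"peak at period ~ {periods[np.argmax(power)]:.0f} years")  # ~100
```

The timescale-error treatment in the paper amounts to repeating this computation over many perturbed versions of t and recording how the peak's height and frequency scatter.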
Abstract:
There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
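A toy genetic algorithm of the kind this abstract uses to minimise its cost function: tournament selection, blend crossover and Gaussian mutation over a two-parameter space. The cost function below is an invented stand-in with the multimodal, ill-conditioned flavour described (its minimum is degenerate along a*b = 2, echoing the non-independence of the parameters), not the paper's GWD misfit.

```python
# A simple real-coded genetic algorithm minimising a multimodal cost.
import numpy as np

rng = np.random.default_rng(4)

def cost(p):
    """Invented cost: degenerate along a*b = 2, with small local ripples."""
    a, b = p
    return (a * b - 2.0) ** 2 + 0.01 * (a - b) ** 2 + 0.3 * np.sin(5 * a) ** 2

pop = rng.uniform(0.1, 5.0, (60, 2))          # initial population
for gen in range(100):
    fitness = np.array([cost(p) for p in pop])
    # Tournament selection: the better of two random individuals survives.
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
    # Blend crossover between consecutive parents, then Gaussian mutation.
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    pop = children + rng.normal(0, 0.05, children.shape)

best = pop[np.argmin([cost(p) for p in pop])]
print("best parameters:", np.round(best, 3), "cost:", round(cost(best), 4))
```

Constraining one parameter to a physically realistic range, as the paper does, would correspond here to clipping one coordinate of the population each generation, which breaks the a*b = 2 degeneracy.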
Abstract:
A statistical–dynamical regionalization approach is developed to assess possible changes in wind storm impacts. The method is applied to North Rhine-Westphalia (Western Germany) using the FOOT3DK mesoscale model for dynamical downscaling and ECHAM5/OM1 global circulation model climate projections. The method first classifies typical weather developments within the reanalysis period using a K-means clustering algorithm. Most historical wind storms are associated with four weather developments (primary storm-clusters). Mesoscale simulations are performed for representative elements of all clusters to derive a regional wind climatology. Additionally, 28 historical storms affecting Western Germany are simulated. Empirical functions are estimated to relate wind gust fields to insured losses. Transient ECHAM5/OM1 simulations show an enhanced frequency of primary storm-clusters and storms for 2060–2100 compared with 1960–2000. Accordingly, wind gusts increase over Western Germany, reaching locally +5% for 98th wind gust percentiles (A2 scenario). Consequently, storm losses are expected to increase substantially (+8% for the A1B scenario, +19% for the A2 scenario). Regional patterns show larger changes over north-eastern parts of North Rhine-Westphalia than over western parts. For storms with return periods above 20 yr, loss expectations for Germany may increase by a factor of 2. These results demonstrate the method's ability to assess future changes in loss potentials in regional terms.
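A minimal sketch of the classification step in this approach: cluster daily large-scale circulation fields, flattened into feature vectors, with K-means to obtain typical weather developments. Random vectors with a few built-in regimes stand in for the reanalysis fields; scikit-learn provides the K-means implementation.

```python
# K-means classification of daily circulation fields into weather types.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n_days, n_gridpoints = 1000, 50
# Invented "pressure anomaly" fields drawn around four regime patterns.
centers = rng.normal(0, 2, (4, n_gridpoints))
labels_true = rng.integers(0, 4, n_days)
fields = centers[labels_true] + rng.normal(0, 1.0, (n_days, n_gridpoints))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(fields)

# Cluster occurrence frequencies: comparing these between a control and a
# scenario period is how such a study detects an enhanced frequency of
# storm-prone weather developments.
freq = np.bincount(km.labels_) / n_days
print("cluster frequencies:", np.round(freq, 3))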