963 results for Ratio Test Integer Aperture (RTIA)
Abstract:
All signals that appear to be periodic have some sort of variability from period to period, regardless of how stable they appear to be in a data plot. A true sinusoidal time series is a deterministic function of time that never changes and thus has zero bandwidth around the sinusoid's frequency. Zero bandwidth is impossible in nature since all signals have some intrinsic variability over time; deterministic sinusoids are used to model cycles only as a mathematical convenience. Hinich [IEEE J. Oceanic Eng. 25 (2) (2000) 256-261] introduced a parametric statistical model, called the randomly modulated periodicity (RMP), that allows one to capture the intrinsic variability of a cycle. As with a deterministic periodic signal, the RMP can have a number of harmonics. The likelihood ratio test for this model when the amplitudes and phases are known is given in [M.J. Hinich, Signal Processing 83 (2003) 1349-1352]. This paper addresses a method for detecting an RMP whose amplitudes and phases are an unknown random process, observed in an additive stationary noise process. The only assumption on the additive noise is that it has finite dependence and finite moments. Using simulations based on a simple RMP model, we show a case where the new method can detect the signal when the signal is not detectable in a standard waterfall spectrogram display. (c) 2005 Elsevier B.V. All rights reserved.
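The RMP described above is a periodic signal whose amplitude and phase wander slowly from period to period, observed in stationary additive noise. The snippet below is a minimal sketch of such a signal and its periodogram, not the authors' detector; the modulation model, rates, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0          # sampling rate (Hz), arbitrary
f0 = 50.0            # cycle frequency (Hz), arbitrary
n = 4096
t = np.arange(n) / fs

# A deterministic sinusoid would be A*cos(2*pi*f0*t + phi) with fixed A and phi.
# An RMP lets the amplitude and phase wander slowly from period to period.
amp = 1.0 + 0.3 * np.cumsum(rng.normal(0, 0.01, n))   # slowly varying amplitude
phase = 0.2 * np.cumsum(rng.normal(0, 0.01, n))       # slowly varying phase
noise = rng.normal(0, 1.0, n)                         # stationary additive noise

x = amp * np.cos(2 * np.pi * f0 * t + phase) + noise

# The periodogram shows power spread in a narrow band around f0 rather than a spectral line.
spec = np.abs(np.fft.rfft(x)) ** 2 / n
freqs = np.fft.rfftfreq(n, 1 / fs)
print("peak near", freqs[np.argmax(spec)], "Hz")
```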
Abstract:
There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations. Bartlett's test may be too sensitive, but Levene's and the Brown-Forsythe tests also have problems. We would recommend the use of the variance-ratio test to compare two variances and the careful application of Bartlett's test if there are more than two groups. Considering that these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also not likely to be met, e.g., lack of normality or non-additivity of treatment effects, then it may be better either to transform the data or to carry out a non-parametric test on the data.
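A minimal sketch of the tests discussed above, on synthetic normal data: the two-sample variance-ratio (F) test is coded directly here, followed by Bartlett's test and the Brown-Forsythe variant of Levene's test for three groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10, 1.0, 30)   # group 1 (synthetic measurements)
b = rng.normal(10, 1.5, 30)   # group 2, larger spread

# Variance-ratio (F) test for two groups: larger sample variance in the numerator.
f = max(a.var(ddof=1), b.var(ddof=1)) / min(a.var(ddof=1), b.var(ddof=1))
df1 = df2 = len(a) - 1                      # equal group sizes here
p_f = 2 * stats.f.sf(f, df1, df2)           # two-sided p-value
print(f"variance ratio F = {f:.2f}, p = {p_f:.4f}")

# Bartlett's test and Levene's test (median-centred = Brown-Forsythe) for more than two groups.
c = rng.normal(10, 1.0, 30)
print(stats.bartlett(a, b, c))
print(stats.levene(a, b, c, center="median"))
```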
Abstract:
A procedure for calculating the critical level and power of the likelihood ratio test, based on a Monte Carlo simulation method, is proposed. General principles for building the software that implements it are given, and some examples of its application are shown.
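The abstract gives no algorithmic detail, so the following is a generic sketch of the Monte Carlo idea for a simple case, a likelihood ratio test of a normal mean with known variance: simulate the test statistic under H0 to estimate the critical level, then under H1 to estimate power. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma, mu0, mu1 = 25, 1.0, 0.0, 0.4   # sample size and hypotheses (illustrative)
n_sim, alpha = 20_000, 0.05

def lrt_stat(x, mu0, sigma):
    # -2 log likelihood ratio for H0: mu = mu0 vs. a free mean, with known sigma
    return len(x) * (x.mean() - mu0) ** 2 / sigma ** 2

# 1) Simulate under H0 to estimate the critical level at significance alpha.
t0 = np.array([lrt_stat(rng.normal(mu0, sigma, n), mu0, sigma) for _ in range(n_sim)])
crit = np.quantile(t0, 1 - alpha)

# 2) Simulate under H1 and estimate the power at that critical value.
t1 = np.array([lrt_stat(rng.normal(mu1, sigma, n), mu0, sigma) for _ in range(n_sim)])
power = (t1 > crit).mean()

print(f"critical value ~ {crit:.2f} (chi2_1 would give 3.84), power ~ {power:.2f}")
```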
Abstract:
2010 Mathematics Subject Classification: 65D18.
Abstract:
Crash reduction factors (CRFs) are used to estimate the number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear, with crashes increasing with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the likelihood ratio test, the Vuong test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, and the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial model generally provides a better fit to the data than the Poisson model, and that the ZINB model gives the best fit when the count data exhibit excess zeros and over-dispersion, which was the case for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
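A hedged sketch of the count-model comparison step described above, using statsmodels on synthetic one-mile-segment data (crashes versus log traffic exposure): a Poisson and a Negative Binomial regression are fitted and compared with a likelihood ratio test and AIC. The zero-inflated variants live in statsmodels.discrete.count_model and can be compared the same way. The data-generating parameters and variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
n = 1000
aadt = rng.uniform(1_000, 60_000, n)                        # synthetic traffic exposure
X = sm.add_constant(np.log(aadt))
mu = np.exp(-6.0 + 0.7 * np.log(aadt))                      # nonlinear crash-exposure link
crashes = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # overdispersed counts

poisson_fit = sm.Poisson(crashes, X).fit(disp=0)
nb_fit = sm.NegativeBinomial(crashes, X).fit(disp=0)

# Likelihood ratio test for overdispersion (the NB model nests Poisson as alpha -> 0).
lr = 2 * (nb_fit.llf - poisson_fit.llf)
print("LR =", round(lr, 1), "p ~", stats.chi2.sf(lr, df=1))
print("AIC: Poisson", round(poisson_fit.aic, 1), "NB", round(nb_fit.aic, 1))
```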
Abstract:
Time series analysis has played an increasingly important role in weather and climate studies. The success of these studies depends crucially on the quality of climate data such as, for instance, air temperature and rainfall data. For this reason, one of the main challenges for researchers in this field is to obtain homogeneous series. A time series of climate data is considered homogeneous when the values of the observed data change only due to climatic factors, i.e., without any interference from external non-climatic factors. Such non-climatic factors may produce undesirable effects in the time series, such as unrealistic homogeneity breaks, trends and jumps. In the present work, climatic time series for the city of Natal, RN, namely air temperature and rainfall series for the period spanning from 1961 to 2012, were investigated. The main purpose was to check for the occurrence of homogeneity breaks or trends in the series under investigation. To this end, some basic statistical procedures, such as normality and independence tests, were applied. The occurrence of trends was investigated by linear regression analysis, as well as by the Spearman and Mann-Kendall tests. Homogeneity was investigated by the SNHT, as well as by the Easterling-Peterson and Mann-Whitney-Pettit tests. The normality analyses showed divergent results. The von Neumann ratio test showed that the air temperature series is not independent and identically distributed (iid), whereas the rainfall series is iid. According to the applied tests, both series display trends: the mean air temperature series displays an increasing trend, whereas the rainfall series shows a decreasing trend. Finally, the homogeneity tests revealed that all series under investigation present inhomogeneities, although the breaks depend on the applied test. In summary, the results showed that the chosen techniques may be applied to verify how well the studied time series are characterized. Therefore, these results should be used as a guide for further investigations of the statistical climatology of Natal or of any other place.
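Two of the checks mentioned above, the von Neumann ratio test for independence and the Mann-Kendall trend test, can be coded directly; the sketch below uses a synthetic yearly series with a weak trend, and the no-tie normal approximation for Mann-Kendall. It is an illustration, not the thesis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic yearly mean temperatures (deg C) with a weak upward trend.
x = 26.0 + 0.02 * np.arange(52) + rng.normal(0, 0.3, 52)

# Von Neumann ratio: values near 2 suggest an iid series; values well below 2 suggest dependence or trend.
vn = np.sum(np.diff(x) ** 2) / np.sum((x - x.mean()) ** 2)
print(f"von Neumann ratio = {vn:.2f}")

# Mann-Kendall S statistic with the normal approximation (no tie correction).
n = len(x)
s = 0.0
for i in range(n - 1):
    s += np.sign(x[i + 1:] - x[i]).sum()
var_s = n * (n - 1) * (2 * n + 5) / 18.0
z = (s - np.sign(s)) / np.sqrt(var_s)          # continuity correction
p = 2 * stats.norm.sf(abs(z))
print(f"Mann-Kendall S = {s:.0f}, z = {z:.2f}, p = {p:.4f}")
```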
Abstract:
The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed in which the overall detection delay is minimized with constraints on both error probabilities and the communication cost. Two types of problems are investigated under this formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the reported statistics from local sensors but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. In addition, decentralized change detection with a communication cost constraint is also investigated. A person-by-person optimum change detection algorithm is proposed, where transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, where the threshold values are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, tradeoffs in parameter choices are investigated through Monte Carlo simulations.
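The fusion-center design in the thesis is more elaborate, but its core building block, the sequential probability ratio test, can be sketched for two Gaussian mean hypotheses as follows; the thresholds use Wald's approximations, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
mu0, mu1, sigma = 0.0, 0.5, 1.0        # H0 and H1 means, known noise std (illustrative)
alpha, beta = 0.01, 0.01               # target false-alarm and miss probabilities

upper = np.log((1 - beta) / alpha)     # accept H1 when the cumulative LLR crosses this
lower = np.log(beta / (1 - alpha))     # accept H0 when the cumulative LLR crosses this

llr, n_obs, true_mu = 0.0, 0, mu1      # simulate observations drawn under H1
while lower < llr < upper:
    x = rng.normal(true_mu, sigma)
    # log-likelihood ratio increment for one observation, H1 versus H0
    llr += (x * (mu1 - mu0) - 0.5 * (mu1 ** 2 - mu0 ** 2)) / sigma ** 2
    n_obs += 1

print("decision:", "H1" if llr >= upper else "H0", "after", n_obs, "samples")
```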
Abstract:
There has been increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question is how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that are able to account for multiple competing measures at the same time and to compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, since the number of studied cases in such comparisons is usually very small. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
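The paper's tests handle several measures and several competitors jointly; as a much-reduced sketch of the Bayesian ingredient (a multinomial-Dirichlet conjugate model), the snippet below compares two classifiers on two measures by counting joint outcomes across data sets and sampling the posterior. The outcome coding, counts, and prior are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Joint outcome per data set when comparing A vs. B on (accuracy, runtime):
# 0 = A better on both, 1 = A better on accuracy only, 2 = A better on runtime only, 3 = B better on both.
counts = np.array([11, 4, 3, 2])                 # hypothetical counts over 20 data sets
prior = np.ones(4)                               # symmetric Dirichlet(1, ..., 1) prior

# Conjugacy: the posterior over outcome probabilities is Dirichlet(prior + counts).
post = rng.dirichlet(prior + counts, size=100_000)

# Posterior probability that A dominates B on both measures more often than the reverse.
p_a_dominates = (post[:, 0] > post[:, 3]).mean()
print(f"P(A dominates on both measures more often than B) ~ {p_a_dominates:.3f}")
```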
Abstract:
Context. In February-March 2014, the MAGIC telescopes observed the high-frequency peaked BL Lac 1ES 1011+496 (z=0.212) in a flaring state at very high energies (VHE, E>100 GeV). The flux reached a level more than 10 times higher than any previously recorded flaring state of the source. Aims. We describe the characteristics of the flare, presenting the light curve and the spectral parameters of the night-wise spectra and the average spectrum of the whole period. From these data we aim to detect the imprint of the Extragalactic Background Light (EBL) in the VHE spectrum of the source, in order to constrain its intensity in the optical band. Methods. We analyzed the gamma-ray data from the MAGIC telescopes using the standard MAGIC software to produce the light curve and the spectra. To constrain the EBL we implement the method developed by the H.E.S.S. collaboration, in which the intrinsic energy spectrum of the source is modeled with a simple function (< 4 parameters) and the EBL-induced optical depth is calculated using a template EBL model. The likelihood of the observed spectrum is then maximized, including a normalization factor for the EBL opacity among the free parameters. Results. The collected data allowed us to describe the flux changes night by night and to produce differential energy spectra for all nights of the observed period. The estimated intrinsic spectra of all the nights could be fitted by power-law functions. Evaluating the changes in the fit parameters, we conclude that the spectral shapes for most of the nights were compatible, regardless of the flux level, which enabled us to produce an average spectrum from which the EBL imprint could be constrained. The likelihood ratio test shows that the model with an EBL density 1.07 (-0.20, +0.24)_(stat+sys), relative to the one in the tested EBL template (Domínguez et al. 2011), is preferred at the 4.6 σ level over the no-EBL hypothesis, under the assumption that the intrinsic source spectrum can be modeled as a log-parabola. This translates into a constraint on the EBL density in the wavelength range [0.24 μm, 4.25 μm], with a peak value at 1.4 μm of λF_λ = 12.27^(+2.75)_(-2.29) nW m^(-2) sr^(-1), including systematics.
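The EBL-constraint method is only summarized above; the sketch below illustrates the underlying idea on synthetic data: the observed spectrum is modeled as an intrinsic power law attenuated by exp(-alpha*tau(E)), the fit is repeated with the EBL scaling alpha free and with alpha = 0, and the two chi-square values are compared. The template optical depth, spectral points, and the chi-square comparison are illustrative assumptions, not the MAGIC analysis chain.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

# Synthetic spectral points: energies (TeV), a stand-in template optical depth tau(E),
# and observed fluxes (arbitrary units) generated with a true EBL scaling of 1.0.
E = np.array([0.1, 0.15, 0.25, 0.4, 0.6, 1.0, 1.5, 2.5])
tau = 0.8 * E ** 0.9
f_true = E ** -2.0 * np.exp(-1.0 * tau)
f_err = 0.05 * f_true
f_obs = f_true + rng.normal(0, f_err)

def model(e, norm, index, alpha):
    # intrinsic power law attenuated by alpha times the template optical depth
    # (tau is precomputed on the same energy grid e)
    return norm * e ** index * np.exp(-alpha * tau)

# Fit with the EBL scaling alpha free, then with alpha fixed to 0 (no-EBL hypothesis).
p_free, _ = curve_fit(model, E, f_obs, p0=[1.0, -2.0, 1.0], sigma=f_err, absolute_sigma=True)
p_null, _ = curve_fit(lambda e, n, i: model(e, n, i, 0.0), E, f_obs, p0=[1.0, -2.0],
                      sigma=f_err, absolute_sigma=True)

chi2_free = np.sum(((f_obs - model(E, *p_free)) / f_err) ** 2)
chi2_null = np.sum(((f_obs - model(E, *p_null, 0.0)) / f_err) ** 2)
sig = np.sqrt(max(chi2_null - chi2_free, 0.0))   # ~sqrt(delta chi2) for one extra parameter
print(f"alpha = {p_free[2]:.2f}, preference over no-EBL ~ {sig:.1f} sigma")
```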
Abstract:
BACKGROUND: The genetic basis of hearing loss in humans is relatively poorly understood. In recent years, experimental approaches including laboratory studies of early onset hearing loss in inbred mouse strains, or proteomic analyses of hair cells or hair bundles, have suggested new candidate molecules involved in hearing function. However, the relevance of these genes/gene products to hearing function in humans remains unknown. We investigated whether single nucleotide polymorphisms (SNPs) in the human orthologues of genes of interest arising from the above-mentioned studies correlate with hearing function in children. METHODS: 577 SNPs from 13 genes were each analysed by linear regression against averaged high (3, 4 and 8 kHz) or low frequency (0.5, 1 and 2 kHz) audiometry data from 4970 children in the Avon Longitudinal Study of Parents and Children (ALSPAC) birth cohort at age eleven years. Genes found to contain SNPs with low p-values were then investigated in 3417 adults in the G-EAR study of hearing. RESULTS: Genotypic data were available in ALSPAC for a total of 577 SNPs from 13 genes of interest. Two SNPs approached sample-wide significance (pre-specified at p = 0.00014): rs12959910 in CBP80/20-dependent translation initiation factor (CTIF) for averaged high frequency hearing (p = 0.00079, β = 0.61 dB per minor allele); and rs10492452 in L-plastin (LCP1) for averaged low frequency hearing (p = 0.00056, β = 0.45 dB). For low frequencies, rs9567638 in LCP1 also enhanced hearing in females (p = 0.0011, β = -1.76 dB; males p = 0.23, β = 0.61 dB, likelihood-ratio test p = 0.006). SNPs in LCP1 and CTIF were then examined against low and high frequency hearing data for adults in G-EAR. Although the ALSPAC results were not replicated, a SNP in LCP1, rs17601960, is in strong LD with rs9567638, and was associated with enhanced low frequency hearing in adult females in G-EAR (p = 0.00084). CONCLUSIONS: There was evidence to suggest that multiple SNPs in CTIF may contribute a small detrimental effect to hearing, and that a sex-specific locus in LCP1 is protective of hearing. No individual SNPs reached sample-wide significance in both ALSPAC and G-EAR. This is the first report of a possible association between LCP1 and hearing function.
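A minimal sketch of the per-SNP regression and the sex-interaction likelihood-ratio test described above, using statsmodels on synthetic genotype and audiometry data; the effect sizes, allele frequency, and variable names are illustrative, not the ALSPAC analysis.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(8)
n = 5000
dosage = rng.binomial(2, 0.3, n)                 # minor-allele count for one SNP (synthetic)
female = rng.binomial(1, 0.5, n)
# Synthetic hearing threshold (dB): worse with more minor alleles, protective effect in females.
threshold = 10 + 0.6 * dosage - 1.5 * dosage * female + rng.normal(0, 8, n)

X_base = sm.add_constant(np.column_stack([dosage, female]))
X_int = sm.add_constant(np.column_stack([dosage, female, dosage * female]))

fit_base = sm.OLS(threshold, X_base).fit()
fit_int = sm.OLS(threshold, X_int).fit()

# Likelihood-ratio test for the sex-by-genotype interaction (one extra parameter).
lr = 2 * (fit_int.llf - fit_base.llf)
print("beta per minor allele:", round(fit_int.params[1], 2), "dB")
print("interaction LR p =", round(stats.chi2.sf(lr, df=1), 4))
```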
Abstract:
By definition, the domestication process leads to an overall reduction of crop genetic diversity. This has led to the current search for genomic regions in crop wild relatives (CWR), an important task for modern carrot breeding. Massive sequencing now makes it possible to discover novel genetic resources in wild populations, but this quest could be aided by the use of a surrogate gene to first identify and prioritize novel wild populations for increased sequencing effort. The alternative oxidase (AOX) gene family seems to be linked to all kinds of abiotic and biotic stress reactions in various organisms and thus has the potential to be used in the identification of CWR hotspots of environment-adapted diversity. High variability of DcAOX1 was found in populations of wild carrot sampled across a West-European environmental gradient. Even though no direct relation was found with the analyzed climatic conditions or with physical distance, population differentiation exists and results mainly from the polymorphisms associated with DcAOX1 exon 1 and intron 1. The relatively high number of amino acid changes and the identification of several unusually variable positions (through a likelihood ratio test) suggest that the DcAOX1 gene might be under positive selection. However, if positive selection is acting, it acts only on some specific populations (i.e., in the form of adaptive differences among population locations), given the observed high genetic diversity. We were able to identify two populations with higher levels of differentiation, which are promising as hotspots of specific functional diversity.
Abstract:
When noise considerations are made, nonredundant arrays (NRAs) possess many advantages that other arrays, e.g., uniformly redundant arrays (URAs), do not in applications of coded aperture imaging. However, a low aperture opening ratio limits the application of NRAs in practice. In this paper, we present a computer search method for designing NRAs based on a global optimization algorithm named DIRECT. Compared with existing NRAs, e.g., the well-known and widely used Golay NRAs, the NRAs found by our method have higher aperture opening ratios and autocorrelation compression ratios. These advantages make our aperture arrays very useful for practical applications, especially those in which the aperture size is limited. We also present some of the aperture arrays we found; these arrays have the interesting property of belonging to both the NRA and URA classes. (C) 2006 Elsevier GmbH. All rights reserved.
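As an illustration of the two figures of merit mentioned above, the sketch below computes the aperture opening ratio and a peak-to-sidelobe autocorrelation compression ratio for a small 1-D nonredundant mask; the mask and the exact definition of the compression ratio are illustrative assumptions, not the arrays found in the paper.

```python
import numpy as np

# A small 1-D nonredundant mask: open elements at {0, 1, 4, 9, 11} on a length-12 aperture
# (all pairwise separations are distinct, as in a Golomb ruler).
marks = [0, 1, 4, 9, 11]
mask = np.zeros(12, dtype=int)
mask[marks] = 1

# Aperture opening ratio: fraction of the aperture that is open.
opening_ratio = mask.sum() / mask.size

# Full autocorrelation; nonredundancy means every nonzero lag contributes at most once.
acf = np.correlate(mask, mask, mode="full")
peak = acf.max()                       # central peak = number of open elements
sidelobe = acf[acf != peak].max()      # largest off-peak value
compression_ratio = peak / sidelobe

print(f"opening ratio = {opening_ratio:.2f}")
print(f"autocorrelation peak/sidelobe = {peak}/{sidelobe} = {compression_ratio:.1f}")
```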
Abstract:
Epidemiological evidence shows that a diet high in monounsaturated fatty acids (MUFA) but low in saturated fatty acids (SFA) is associated with reduced risk of CHD. The hypocholesterolaemic effect of MUFA is known, but there has been little research on the effect of test meal MUFA and SFA composition on postprandial lipid metabolism. The present study investigated the effect of meals containing different proportions of MUFA and SFA on postprandial triacylglycerol and non-esterified fatty acid (NEFA) metabolism. Thirty healthy male volunteers consumed three meals containing equal amounts of fat (40 g), but different proportions of MUFA (12, 17 and 24% energy), in random order. Postprandial plasma triacylglycerol, apolipoprotein B-48, cholesterol, HDL-cholesterol, glucose and insulin concentrations and lipoprotein lipase (EC 3.1.1.34) activity were not significantly different following the three meals, which varied in their levels of SFA and MUFA. There was a significant difference in the postprandial NEFA response between meals. The incremental area under the curve of postprandial plasma NEFA concentrations was significantly (P = 0.03) lower following the high-MUFA meal. Regression analysis showed that the non-significant difference in fasting NEFA concentrations was the most important factor determining the difference between meals, and that the test meal MUFA content had only a minor effect. In conclusion, varying the levels of MUFA and SFA in test meals has little or no effect on postprandial lipid metabolism.
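The incremental area under the curve used for the postprandial NEFA response can be computed with the trapezoid rule; this is a minimal sketch with hypothetical sampling times and concentrations, using one common convention in which excursions below the fasting baseline are clipped to zero.

```python
import numpy as np
from scipy.integrate import trapezoid

# Hypothetical postprandial plasma NEFA samples (mmol/L) at times after the test meal (min).
t = np.array([0, 30, 60, 90, 120, 180, 240, 300, 360])
nefa = np.array([0.45, 0.30, 0.25, 0.28, 0.35, 0.50, 0.60, 0.58, 0.52])

baseline = nefa[0]                                   # fasting (time 0) concentration
# Incremental AUC: area between the response curve and the fasting baseline,
# with values below baseline clipped to zero.
iauc = trapezoid(np.clip(nefa - baseline, 0, None), t)
print(f"incremental AUC = {iauc:.1f} mmol/L x min")
```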
Abstract:
The study of short implants is relevant to the biomechanics of dental implants, and research on crown height increase has implications for daily clinical practice. The aim of this study was to analyze the biomechanical interactions of a single implant-supported prosthesis with different crown heights under vertical and oblique forces, using the 3-D finite element method. Six 3-D models were designed with Invesalius 3.0, Rhinoceros 3D 4.0, and Solidworks 2010 software. Each model was constructed with a mandibular bone block segment including an implant supporting a screwed metal-ceramic crown. The crown height was set at 10, 12.5, or 15 mm. The applied force was 200 N (axial) and 100 N (oblique). We performed ANOVA and Tukey tests; p < 0.05 was considered statistically significant. Under axial load, the increase in crown height did not influence the stress distribution on the prosthetic screw (p > 0.05). However, under oblique load, crown heights of 12.5 and 15 mm significantly worsened the stress distribution on the screws and on the cortical bone (p < 0.001). A high crown-to-implant (C/I) ratio worsened the microstrain distribution in bone tissue under both axial and oblique loads (p < 0.001). Increasing the crown height is a potentially deleterious factor for the screws and for the different regions of bone tissue. (C) 2014 Elsevier Ltd. All rights reserved.
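The ANOVA-plus-Tukey comparison named above can be sketched with scipy and statsmodels on hypothetical peak-stress values for the three crown heights; the numbers below are synthetic and not taken from the FEM models.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(9)
# Hypothetical peak stress values (MPa) for three crown heights under oblique load.
stress = {
    "10.0 mm": rng.normal(90, 8, 10),
    "12.5 mm": rng.normal(110, 8, 10),
    "15.0 mm": rng.normal(130, 8, 10),
}

# One-way ANOVA across the three crown heights.
f_stat, p_anova = stats.f_oneway(*stress.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_anova:.4g}")

# Tukey HSD pairwise comparisons at the 5% level.
values = np.concatenate(list(stress.values()))
groups = np.repeat(list(stress.keys()), [len(v) for v in stress.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05).summary())
```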