962 results for Hypothesis testing


Relevance: 60.00%

Publisher:

Abstract:

Orthogonal frequency division multiplexing (OFDM) systems are more sensitive to carrier frequency offset (CFO) than conventional single-carrier systems. CFO destroys the orthogonality among subcarriers, resulting in inter-carrier interference (ICI) and degrading system performance. To mitigate its effect, the CFO must be estimated and compensated before demodulation. The CFO can be divided into an integer part and a fractional part. In this paper, we investigate a maximum-likelihood estimator (MLE) for estimating the integer part of the CFO in OFDM systems, which requires only one OFDM block of pilot symbols. To reduce the computational complexity of the MLE and improve bandwidth efficiency, a suboptimum estimator (Sub MLE) is studied. Based on the hypothesis testing method, a threshold Sub MLE (T-Sub MLE) is proposed to further reduce the computational complexity. A performance analysis of the proposed T-Sub MLE is derived, and the analytical results match the simulation results well. Numerical results show that the proposed estimators are effective and reliable in both additive white Gaussian noise (AWGN) and frequency-selective fading channels in OFDM systems.
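For illustration, the integer-CFO search at the heart of such an estimator can be sketched as a correlation against cyclic shifts of a known pilot block. This is a minimal sketch under a simplified channel model, not the paper's exact MLE or T-Sub MLE; the names and settings (N, SEARCH_RANGE, the BPSK pilot) are illustrative assumptions.

```python
# Minimal sketch: an integer CFO of g bins cyclically shifts the received
# subcarriers, so an ML-style search correlates the received spectrum
# against cyclic shifts of the known pilot and picks the best match.
import numpy as np

N = 64                       # number of subcarriers (assumed)
SEARCH_RANGE = range(-8, 9)  # candidate integer CFOs (assumed)

rng = np.random.default_rng(0)
pilot = rng.choice([1, -1], size=N).astype(complex)   # known BPSK pilot block

def transmit(pilot, int_cfo, snr_db):
    """Model an integer CFO as a cyclic subcarrier shift plus complex AWGN."""
    rx = np.roll(pilot, int_cfo)
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)     # unit-power pilot assumed
    return rx + noise_std * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def estimate_int_cfo(rx, pilot):
    """Pick the cyclic shift whose correlation with the pilot is largest."""
    metrics = [np.abs(np.vdot(np.roll(pilot, g), rx)) for g in SEARCH_RANGE]
    return list(SEARCH_RANGE)[int(np.argmax(metrics))]

rx = transmit(pilot, int_cfo=3, snr_db=10)
print(estimate_int_cfo(rx, pilot))   # expected: 3
```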

Relevance: 60.00%

Publisher:

Abstract:

Modeling of on-body propagation channels is of paramount importance to those wishing to evaluate radio channel performance for wearable devices in body area networks (BANs). Difficulties in modeling arise due to the highly variable channel conditions related to changes in the user's state and local environment. This study characterizes these influences by using time-series analysis to examine and model signal characteristics for on-body radio channels in user-stationary and mobile scenarios in four different locations: anechoic chamber, open office area, hallway, and outdoor environment. Autocorrelation and cross-correlation functions are reported and shown to be dependent on body state and surroundings. Autoregressive (AR) transfer functions are used to perform time-series analysis and develop models for fading in various on-body links. Due to the non-Gaussian nature of the logarithmically transformed observed signal envelope in the majority of mobile user states, a simple method for reproducing the fading based on lognormal and Nakagami statistics is proposed. The validity of the AR models is evaluated using hypothesis testing based on the Ljung-Box statistic, and the estimated distributional parameters of the simulator output are compared with those from experimental results.
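The validation step described here can be illustrated with a small sketch: fit an AR model to a synthetic stand-in for a log-transformed fading envelope, then apply the Ljung-Box test to the residuals; a large p-value suggests the model has absorbed the serial correlation. The AR order, lag count, and synthetic series are assumptions, not the study's data or settings.

```python
# Minimal sketch of AR-model validation via the Ljung-Box statistic.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)

# Synthetic stand-in for a measured on-body fading envelope (dB): an AR(1).
n = 2000
envelope_db = np.zeros(n)
for t in range(1, n):
    envelope_db[t] = 0.9 * envelope_db[t - 1] + rng.standard_normal()

ar_fit = AutoReg(envelope_db, lags=2).fit()          # AR order assumed
lb = acorr_ljungbox(ar_fit.resid, lags=[20])         # test residual whiteness
print(lb)   # large lb_pvalue -> residuals look like white noise
```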

Relevance: 60.00%

Publisher:

Abstract:

In this paper we argue that it is often adaptive to use one's background beliefs when interpreting information that, from a normative point of view, is incomplete. In both of the experiments reported here, participants were presented with an item possessing two features and were asked to judge, in the light of some evidence concerning the features, to which of two categories it was more likely that the item belonged. It was found that when participants received evidence relevant to just one of these hypothesised categories (i.e. evidence that did not form a Bayesian likelihood ratio) they used their background beliefs to interpret this information. In Experiment 2, on the other hand, participants behaved in a broadly Bayesian manner when the evidence they received constituted a complete likelihood ratio. We discuss the circumstances under which participants, when making their judgements, consider the alternative hypothesis. We conclude with a discussion of the implications of our results for an understanding of hypothesis testing, belief revision, and categorisation.
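As a point of reference, the normative Bayesian benchmark against which participants were compared can be written out in a few lines. The numbers below are purely illustrative, and posterior_prob_A is a hypothetical helper name; when only the likelihood under one category is supplied, the ratio is incomplete, which is exactly the condition the paper examines.

```python
# A complete likelihood ratio plus priors gives the posterior via Bayes' rule.
def posterior_prob_A(prior_A, lik_given_A, lik_given_B):
    """P(A | evidence) from the prior and a complete likelihood ratio."""
    prior_B = 1.0 - prior_A
    num = lik_given_A * prior_A
    return num / (num + lik_given_B * prior_B)

# Evidence favouring A by 3:1 and equal priors -> posterior P(A) = 0.75.
print(posterior_prob_A(0.5, 0.6, 0.2))
```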

Relevance: 60.00%

Publisher:

Abstract:

The quick, easy way to master all the statistics you'll ever need. The bad news first: if you want a psychology degree, you'll need to know statistics. Now for the good news: Psychology Statistics For Dummies. Featuring jargon-free explanations, step-by-step instructions and dozens of real-life examples, Psychology Statistics For Dummies makes the knotty world of statistics a lot less baffling. Rather than padding the text with concepts and procedures irrelevant to the task, the authors focus only on the statistics psychology students need to know. As an alternative to typical, lead-heavy statistics texts or supplements to assigned course reading, this is one book psychology students won't want to be without.

- Ease into statistics: start out with an introduction to how statistics are used by psychologists, including the types of variables they use and how they measure them
- Get your feet wet: quickly learn the basics of descriptive statistics, such as central tendency and measures of dispersion, along with common ways of graphically depicting information
- Meet your new best friend: learn the ins and outs of SPSS, the most popular statistics software package among psychology students, including how to input, manipulate and analyse data
- Analyse this: get up to speed on statistical analysis core concepts, such as probability and inference, hypothesis testing, distributions, Z-scores and effect sizes
- Correlate that: get the lowdown on common procedures for defining relationships between variables, including linear regressions, associations between categorical data and more
- Analyse by inference: master key methods in inferential statistics, including techniques for analysing independent-groups designs and repeated-measures research designs

Open the book and find:
- Ways to describe statistical data
- How to use SPSS statistical software
- Probability theory and statistical inference
- Descriptive statistics basics
- How to test hypotheses
- Correlations and other relationships between variables
- Core concepts in statistical analysis for psychology
- Analysing research designs

Learn to:
- Use SPSS to analyse data
- Master statistical methods and procedures using psychology-based explanations and examples
- Create better reports
- Identify key concepts and pass your course

Relevance: 60.00%

Publisher:

Abstract:

I draw attention to the need for ecologists to take spatial structure into account more seriously in hypothesis testing. If spatial autocorrelation is ignored, as it usually is, then analyses of ecological patterns in terms of environmental factors can produce very misleading results. This is demonstrated using synthetic but realistic spatial patterns with known spatial properties which are subjected to classical correlation and multiple regression analyses. Correlation between an autocorrelated response variable and each of a set of explanatory variables is strongly biased in favour of those explanatory variables that are highly autocorrelated - the expected magnitude of the correlation coefficient increases with autocorrelation even if the spatial patterns are completely independent. Similarly, multiple regression analysis finds highly autocorrelated explanatory variables "significant" much more frequently than it should. The chances of mistakenly identifying a "significant" slope across an autocorrelated pattern are very high if classical regression is used. Consequently, under these circumstances, strongly autocorrelated environmental factors reported in the literature as associated with ecological patterns may not actually be significant. It is likely that these factors wrongly described as important constitute a red-shifted subset of the set of potential explanations, and that more spatially discontinuous factors (those with bluer spectra) are actually relatively more important than their present status suggests. There is much that ecologists can do to improve on this situation. I discuss various approaches to the problem of spatial autocorrelation from the literature and present a randomisation test for the association of two spatial patterns, which has advantages over currently available methods.
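The bias described here is easy to reproduce in simulation: two independent random patterns smoothed to induce spatial autocorrelation show a much larger expected |r| than two independent white-noise patterns. This is a minimal sketch; the grid size, smoothing scale, and trial count are arbitrary choices for illustration.

```python
# Independent-but-autocorrelated patterns inflate |r| relative to white noise.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

def field(sigma, n=40):
    """A random spatial pattern; sigma > 0 adds autocorrelation by smoothing."""
    f = rng.standard_normal((n, n))
    return gaussian_filter(f, sigma) if sigma > 0 else f

def mean_abs_r(sigma, trials=200):
    """Mean |correlation| between pairs of fully independent patterns."""
    rs = [abs(np.corrcoef(field(sigma).ravel(), field(sigma).ravel())[0, 1])
          for _ in range(trials)]
    return float(np.mean(rs))

print(mean_abs_r(sigma=0))   # white noise: mean |r| near zero
print(mean_abs_r(sigma=5))   # autocorrelated: markedly inflated |r|
```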

Relevance: 60.00%

Publisher:

Abstract:

Multiple cue probability learning (MCPL) involves learning to predict a criterion based on a set of novel cues when feedback is provided in response to each judgment made. But to what extent does MCPL require controlled attention and explicit hypothesis testing? The results of two experiments show that this depends on cue polarity. Learning about cues that predict positively is aided by automatic cognitive processes, whereas learning about cues that predict negatively is especially demanding on controlled attention and hypothesis-testing processes. In the studies reported here, negative, but not positive, cue learning was related to individual differences in working memory capacity, both on measures of overall judgment performance and in modelling of the implicit learning process. However, the introduction of a novel method to monitor participants' explicit beliefs about a set of cues on a trial-by-trial basis revealed that participants were engaged in explicit hypothesis testing about positive and negative cues, and explicit beliefs about both types of cues were linked to working memory capacity. Taken together, our results indicate that while people are engaged in explicit hypothesis testing during cue learning, explicit beliefs are applied to judgment only when cues are negative.
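The kind of implicit trial-by-trial learning process modelled in such studies is often approximated with a delta-rule learner. The sketch below is a generic illustration with one positive and one negative cue, not the authors' model; the learning rate, noise level, and trial count are all assumed.

```python
# A delta-rule learner tracking cue weights from trial-by-trial feedback.
import numpy as np

rng = np.random.default_rng(7)
true_w = np.array([0.6, -0.6])   # one positive and one negative cue (assumed)
w, lr = np.zeros(2), 0.05        # learned weights and learning rate

for trial in range(500):
    cues = rng.standard_normal(2)
    criterion = cues @ true_w + 0.1 * rng.standard_normal()  # noisy feedback
    prediction = cues @ w
    w += lr * (criterion - prediction) * cues                # delta-rule update

print(w.round(2))   # approaches [0.6, -0.6] for both cue polarities
```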

Relevance: 60.00%

Publisher:

Abstract:

Mollusks are the most morphologically disparate living animal phylum: they have diversified into all habitats and have a deep fossil record. The monophyly and identity of their eight living classes are undisputed, but relationships between these groups and patterns of their early radiation have remained elusive. Arguments about traditional morphological phylogeny focus on a small number of topological concepts, but often without regard to the proximity of the individual classes. In contrast, molecular studies have proposed a number of radically different, inherently contradictory, and controversial sister relationships. Here, we assembled a dataset of 42 unique published trees describing molluscan interrelationships. We used these data to ask several questions about the state of resolution of molluscan phylogeny compared to a null model of the variation possible in random trees constructed from a monophyletic assemblage of eight terminals. Although 27 different unique trees have been proposed from morphological inference, the majority of these are not statistically different from each other. Within the available molecular topologies, only four studies to date have included the deep-sea class Monoplacophora, but 36.4% of all trees are not significantly different. We also present supertrees derived from two data partitions and three methods, including all available molecular molluscan phylogenies, which will form the basis for future hypothesis testing. The supertrees presented here were not constructed to provide yet another hypothesis of molluscan relationships, but rather to algorithmically evaluate the relationships present in the disparate published topologies. Based on the totality of available evidence, certain patterns of relatedness among the constituent taxa become clear. The internodal distance is consistently short between a few taxon pairs, particularly supporting the relatedness of Monoplacophora and the chitons, Polyplacophora. Other taxon pairs are rarely or never found in close proximity, such as the vermiform Caudofoveata and Bivalvia. Our results have specific utility for guiding constructive research planning in order to better test relationships in Mollusca as well as in other problematic groups. Taxa with consistently proximate relationships should be the focus of a combined approach in a concerted assessment of potential genetic and anatomical homology, while unequivocally distant taxa will make the most constructive choices for exemplar selection in higher-level phylogenomic analyses.
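For context on the null model of random trees, the size of the space of possible topologies for eight terminals follows from the standard combinatorial count of unrooted binary trees, (2n-5)!!. Whether the paper's null model uses rooted or unrooted trees is not stated here, so this is simply the conventional figure.

```python
# Count the distinct unrooted binary tree topologies on n labelled taxa.
from math import prod

def num_unrooted_trees(n):
    """(2n-5)!! distinct unrooted binary topologies, valid for n >= 3."""
    return prod(range(1, 2 * n - 4, 2))

print(num_unrooted_trees(8))   # 10395 possible trees for eight classes
```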

Relevance: 60.00%

Publisher:

Abstract:

An investigation into exchange-traded fund (ETF) outperformance during the period 2008-2012 is undertaken utilizing a data set of 288 U.S.-traded securities. ETFs are tested for net asset value (NAV) premium and for outperformance of their underlying indices and a market benchmark, with Sharpe, Treynor, and Sortino ratios employed as risk-adjusted performance measures. A key contribution is the application of an innovative generalized stepdown procedure to control for data-snooping bias. We find that a large proportion of optimized-replication and debt asset class ETFs display risk-adjusted premiums, with energy- and precious-metals-focused funds outperforming the S&P 500 market benchmark.
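The risk-adjusted measures named above can be sketched from a series of periodic excess returns. The annualisation factor of 252 trading days and the simulated return series are assumptions for illustration; Treynor's ratio, which additionally requires a beta estimate, is omitted.

```python
# Sharpe and Sortino ratios computed from daily excess returns.
import numpy as np

def sharpe_ratio(excess_returns, periods=252):
    """Annualised mean excess return over total volatility."""
    return np.sqrt(periods) * excess_returns.mean() / excess_returns.std(ddof=1)

def sortino_ratio(excess_returns, periods=252):
    """Annualised mean excess return over downside deviation (target 0)."""
    downside_dev = np.sqrt(np.mean(np.minimum(excess_returns, 0.0) ** 2))
    return np.sqrt(periods) * excess_returns.mean() / downside_dev

rng = np.random.default_rng(3)
r = rng.normal(0.0004, 0.01, size=252)   # one year of daily excess returns
print(sharpe_ratio(r), sortino_ratio(r))
```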

Relevance: 60.00%

Publisher:

Abstract:

Repeated recolonization of freshwater environments following Pleistocene glaciations has played a major role in the evolution and adaptation of anadromous taxa. Located at the western fringe of Europe, Ireland and Britain were likely recolonized rapidly by anadromous fishes from the North Atlantic following the last glacial maximum (LGM). While the presence of unique mitochondrial haplotypes in Ireland suggests that a cryptic northern refugium may have played a role in recolonization, no explicit test of this hypothesis has been conducted. The three-spined stickleback is native and ubiquitous to aquatic ecosystems throughout Ireland, making it an excellent model species with which to examine the biogeographical history of anadromous fishes in the region. We used mitochondrial and microsatellite markers to examine the presence of divergent evolutionary lineages and to assess broad-scale patterns of geographical clustering among postglacially isolated populations. Our results confirm that Ireland is a region of secondary contact for divergent mitochondrial lineages and that endemic haplotypes occur in populations in Central and Southern Ireland. To test whether a putative Irish lineage arose from a cryptic Irish refugium, we used approximate Bayesian computation (ABC). However, we found no support for this hypothesis. Instead, the Irish lineage likely diverged from the European lineage as a result of postglacial isolation of freshwater populations by rising sea levels. These findings emphasize the need to rigorously test biogeographical hypotheses and contribute further evidence that postglacial processes may have shaped genetic diversity in temperate fauna.
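The ABC model-choice logic used to compare the two scenarios can be sketched generically: draw from each scenario's simulator, accept draws whose summary statistic falls close to the observed value, and read the accepted proportions as approximate posterior model probabilities. The toy simulators, tolerance, and observed statistic below are stand-ins, not population-genetic models.

```python
# ABC rejection sampling for model choice between two toy scenarios.
import numpy as np

rng = np.random.default_rng(4)
observed = 0.8     # observed summary statistic (illustrative)
TOL = 0.05         # acceptance tolerance (illustrative)

def simulate(scenario):
    """Toy stand-in simulators for the two competing histories."""
    if scenario == "cryptic_refugium":
        return rng.normal(1.5, 0.5)
    return rng.normal(0.9, 0.5)    # postglacial isolation

accepted = []
for _ in range(100_000):
    for s in ("cryptic_refugium", "postglacial_isolation"):
        if abs(simulate(s) - observed) < TOL:
            accepted.append(s)

# Accepted fraction per scenario approximates its posterior probability.
for s in ("cryptic_refugium", "postglacial_isolation"):
    print(s, round(accepted.count(s) / len(accepted), 3))
```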

Relevance: 60.00%

Publisher:

Abstract:

Computer vision for real-time applications requires tremendous computational power because all images must be processed from the first to the last pixel. Active vision, probing specific objects on the basis of already acquired context, may lead to a significant reduction in processing. This idea is based on a few concepts from our visual cortex (Rensink, Visual Cogn. 7, 17-42, 2000): (1) our physical surroundings can be seen as memory, i.e. there is no need to construct detailed and complete maps; (2) the bandwidth of the "what" and "where" systems is limited, i.e. only one object can be probed at any time; and (3) bottom-up, low-level feature extraction is complemented by top-down hypothesis testing, i.e. there is a rapid convergence of activities in dendritic/axonal connections.

Relevance: 60.00%

Publisher:

Abstract:

Probability and Statistics—Selected Problems is a unique book for senior undergraduate and graduate students seeking a quick review of basic material in probability and statistics. Descriptive statistics are presented first, followed by a review of probability. Discrete and continuous distributions are then presented. Sampling and estimation, together with hypothesis testing, are covered in the last two chapters. Solutions to the proposed exercises are provided for readers' reference.

Relevance: 60.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's degree in Management from the NOVA – School of Business and Economics.

Relevance: 60.00%

Publisher:

Abstract:

Activity of the medial frontal cortex (MFC) has been implicated in attention regulation and performance monitoring. The MFC is thought to generate several event-related potential (ERP) components, known as medial frontal negativities (MFNs), that are elicited when a behavioural response becomes difficult to control (e.g., following an error or when shifting from a frequently executed response). The functional significance of MFNs has traditionally been interpreted in the context of the paradigm used to elicit a specific response, such as errors. In a series of studies, we consider the functional similarity of multiple MFC brain responses by designing novel performance monitoring tasks and exploiting advanced methods for electroencephalography (EEG) signal processing and robust estimation statistics for hypothesis testing. In study 1, we designed a response cueing task and used Independent Component Analysis (ICA) to show that the latent factors describing an MFN to stimuli that cued the potential need to inhibit a response on upcoming trials also accounted for medial frontal brain responses that occurred when individuals made a mistake or inhibited an incorrect response. It was also found that increases in theta occurred in response to each of these task events, and that the effects were evident at the group level and in single cases. In study 2, we replicated our method of classifying MFC activity to cues in our response task and showed again, using additional tasks, that error commission, response inhibition, and, to a lesser extent, the processing of performance feedback all elicited similar changes across MFNs and theta power. In the final study, we converted our response cueing paradigm into a saccade cueing task in order to examine the oscillatory dynamics of response preparation. We found that, compared to easy pro-saccades, successfully preparing a difficult anti-saccadic response was characterized by an increase in MFC theta and the suppression of posterior alpha power prior to executing the eye movement. These findings align with a large body of literature on performance monitoring and ERPs, and indicate that MFNs, along with their signature in theta power, reflect the general process of controlling attention and adapting behaviour without the need to induce error commission, the inhibition of responses, or the presentation of negative feedback.
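The decomposition step can be illustrated with a generic ICA on synthetic mixtures. The sketch below uses scikit-learn's FastICA as a stand-in for the EEG-specific ICA pipeline the authors used; the three synthetic sources and mixing matrix are illustrative.

```python
# FastICA unmixes linearly mixed sources, analogous to separating latent
# medial frontal components from multichannel EEG recordings.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(7 * t),                  # oscillatory component
                np.sign(np.sin(3 * t)),         # square-wave component
                rng.standard_normal(2000)]      # noise component
mixing = rng.standard_normal((3, 3))
eeg_like = sources @ mixing.T                   # "channels" = mixed sources

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(eeg_like)         # estimated latent components
print(recovered.shape)                          # (2000, 3)
```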

Relevance: 60.00%

Publisher:

Abstract:

In this paper, we review some recent developments in econometrics that may be of interest to researchers in fields other than economics, and we highlight the particular light that econometrics can shed on certain general themes in methodology and the philosophy of science, such as falsifiability as a criterion of the scientific character of a theory (Popper), the underdetermination of theories by data (Quine), and instrumentalism. In particular, we emphasize the contrast between two styles of modelling - the parsimonious approach and the statistical-descriptive approach - and we discuss the links between the theory of statistical tests and the philosophy of science.

Relevance: 60.00%

Publisher:

Abstract:

A wide range of tests for heteroskedasticity has been proposed in the econometrics and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard tests and the new tests suggested. We show that the MC test procedure conveniently solves the intractable null distribution problem, in particular the problems raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
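The Monte Carlo test idea is straightforward to sketch for a Goldfeld-Quandt-type statistic: because the statistic is pivotal under the Gaussian null (the regression coefficients and error scale cancel out of the residual-variance ratio), its null distribution can be simulated directly, and with N replications the p-value (R+1)/(N+1) is exact by construction. The design matrix, sample split, and N below are illustrative choices, not the paper's setup.

```python
# Monte Carlo (exact) p-value for a Goldfeld-Quandt-type homoskedasticity test.
import numpy as np

rng = np.random.default_rng(6)
n, k = 60, 2
X = np.c_[np.ones(n), rng.standard_normal((n, k - 1))]   # design (assumed)

def gq_stat(y, X):
    """Ratio of residual sums of squares from the two halves of the sample."""
    h = len(y) // 2
    def ssr(yy, XX):
        beta, *_ = np.linalg.lstsq(XX, yy, rcond=None)
        r = yy - XX @ beta
        return r @ r
    return ssr(y[h:], X[h:]) / ssr(y[:h], X[:h])

y_obs = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)  # "observed" data
s_obs = gq_stat(y_obs, X)

# Simulate the null: beta and sigma drop out, so y = standard normal errors.
N = 999
s_sim = np.array([gq_stat(rng.standard_normal(n), X) for _ in range(N)])
p_mc = (1 + np.sum(s_sim >= s_obs)) / (N + 1)   # exact MC p-value
print(p_mc)
```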