929 results for likelihood ratio test


Relevance: 80.00%

Abstract:

Assigning probabilities to alleged relationships, given DNA profiles, requires, among other things, calculation of a likelihood ratio (LR). Such calculations usually assume independence of genes: this assumption is not appropriate when the tested individuals share recent ancestry due to population substructure. Adjusted LR formulae, incorporating the coancestry coefficient F_ST, are presented here for various two-person relationships, and the issue of mutations in parentage testing is also addressed.
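
The article's adjusted formulae are not reproduced in this abstract; the sketch below only illustrates, under the standard Balding-Nichols model, how a coancestry correction changes a single-locus likelihood ratio relative to the independence assumption. The function name, the allele frequency, the value of F_ST and the conditioning counts are all illustrative choices, not figures from the paper.

```python
def bn_prob(p, n_seen, n_typed, fst):
    """Balding-Nichols ("theta-corrected") probability that the next allele
    drawn from the subpopulation is of type A, given that n_seen copies of A
    have already been observed among n_typed typed alleles, with coancestry
    coefficient fst."""
    return (n_seen * fst + (1.0 - fst) * p) / (1.0 + (n_typed - 1.0) * fst)

# Illustrative single-locus, paternity-style LR in which the "random man"
# alternative must supply one more copy of allele A.  The conditioning counts
# (n_seen, n_typed) are placeholders; the correct counts depend on which
# genotypes are conditioned on for the relationship being tested.
p_A, theta = 0.1, 0.03
lr_independent = 1.0 / p_A                                 # independence assumption
lr_adjusted = 1.0 / bn_prob(p_A, n_seen=3, n_typed=4, fst=theta)
print(lr_independent, lr_adjusted)                         # the adjustment lowers the LR here
```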

Relevance: 80.00%

Abstract:

This chapter considers Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) modulation and demodulation with the aim of optimizing Ultra-Wideband (UWB) system performance. OFDM is a form of multicarrier modulation and is central to MB-OFDM system performance; it is also a low-cost digital signal processing component that uses the Fast Fourier Transform (FFT) algorithm to implement multicarrier orthogonality efficiently. Within the MB-OFDM approach, OFDM modulation is employed in each 528 MHz wide band to transmit data, while frequency hopping is used across the different bands. Each parallel bit stream can be mapped onto one of the OFDM subcarriers. Quadrature Phase Shift Keying (QPSK) and Dual Carrier Modulation (DCM) are currently used as the modulation schemes for MB-OFDM in the ECMA-368 defined UWB radio platform. A dual QPSK soft-demapper suitable for ECMA-368 exploits the inherent Time-Domain Spreading (TDS) and guard symbol subcarrier diversity to improve receiver performance, yet merges decoding operations to minimize hardware and power requirements. There are several methods to demap the DCM: soft bit demapping, Maximum Likelihood (ML) soft bit demapping, and Log Likelihood Ratio (LLR) demapping. A Channel State Information (CSI) aided scheme, coupled with the band hopping information, is used as a further technique to improve DCM demapping performance. ECMA-368 offers up to 480 Mb/s instantaneous bit rate to the Medium Access Control (MAC) layer, but depending on radio channel conditions dropped packets unfortunately result in a lower throughput. An alternative high data rate modulation scheme, termed Dual Circular 32-QAM, fits within the configuration of the current standard and increases system throughput, thus maintaining a high throughput even with a moderate level of dropped packets.
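
The standard's demapper designs are not reproduced here; as a minimal, generic illustration of LLR soft demapping, the snippet below computes per-bit log-likelihood ratios for a Gray-mapped QPSK symbol received over an AWGN channel. The Gray mapping, unit-energy constellation and noise model are common textbook assumptions rather than details taken from ECMA-368.

```python
import numpy as np

# Gray-mapped QPSK constellation: bits (b0, b1) -> symbol, scaled to unit energy.
QPSK = {(0, 0): (1 + 1j), (0, 1): (1 - 1j),
        (1, 0): (-1 + 1j), (1, 1): (-1 - 1j)}
QPSK = {b: s / np.sqrt(2) for b, s in QPSK.items()}

def qpsk_llrs(y, noise_var):
    """Exact per-bit LLRs for one received QPSK sample y under AWGN with total
    complex noise variance noise_var (LLR > 0 favours bit = 0)."""
    llrs = []
    for bit_pos in (0, 1):
        num = sum(np.exp(-abs(y - s) ** 2 / noise_var)
                  for b, s in QPSK.items() if b[bit_pos] == 0)
        den = sum(np.exp(-abs(y - s) ** 2 / noise_var)
                  for b, s in QPSK.items() if b[bit_pos] == 1)
        llrs.append(float(np.log(num / den)))
    return llrs

# Transmit bits (0, 1), add a little noise, and demap.
rng = np.random.default_rng(0)
tx = QPSK[(0, 1)]
y = tx + (rng.normal(scale=0.2) + 1j * rng.normal(scale=0.2))
print(qpsk_llrs(y, noise_var=2 * 0.2 ** 2))
```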

Relevance: 80.00%

Abstract:

Aim: To develop a brief, parent-completed instrument ('ERIC') for detection of cognitive delay in 10-24-month-olds born preterm, or with low birth weight, or with perinatal complications, and to establish its diagnostic properties. Method: Scores were collected from parents of 317 children meeting ≥1 inclusion criteria (birth weight <1500 g; gestational age <34 completed weeks; 5-minute Apgar <7; presence of hypoxic-ischemic encephalopathy) and meeting no exclusion criteria. Children were assessed for cognitive delay using a criterion score (<80) on the Cognitive Scale of the Bayley Scales of Infant and Toddler Development, Third Edition. Items were retained according to their individual associations with delay. Sensitivity, specificity, and positive and negative predictive values were estimated, and a truncated ERIC was developed for use below 14 months. Results: ERIC detected 17 out of 18 delayed children in the sample, with 94.4% sensitivity (95% CI [confidence interval] 83.9-100%), 76.9% specificity (72.1-81.7%), 19.8% positive predictive value (11.4-28.2%), 99.6% negative predictive value (98.7-100%), a positive likelihood ratio of 4.09, and a negative likelihood ratio of 0.07; the associated area under the curve was 0.909 (0.829-0.960). Interpretation: ERIC has potential value as a quickly administered diagnostic instrument for the absence of early cognitive delay in preterm infants of 10-24 months, and as a screen for cognitive delay. Further research may be needed before ERIC can be recommended for wide-scale use.
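
For readers who want to see how the reported figures relate to one another, the sketch below recomputes them from a 2x2 table. The counts are back-calculated from the percentages above (17 of 18 delayed children detected in a sample of 317), so they are approximate reconstructions rather than the study's raw data.

```python
# Counts reconstructed approximately from the reported percentages:
# 18 delayed children (17 detected) and 299 non-delayed (specificity ~76.9%).
tp, fn = 17, 1
tn, fp = 230, 69

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                          # positive predictive value
npv = tn / (tn + fn)                          # negative predictive value
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity

print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
print(f"PPV={ppv:.3f}, NPV={npv:.3f}")
print(f"LR+={lr_positive:.2f}, LR-={lr_negative:.2f}")   # about 4.09 and 0.07
```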

Relevance: 80.00%

Abstract:

Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant-variance model. However, the properties of these models are often poorly understood, which can lead to misinterpretation of results. Here we focus on one of these models, the Ornstein–Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing them with empirical results is critical when fitting the OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
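
As a reminder of the mechanics behind such model comparisons, the snippet below is a generic likelihood ratio test between a simpler and a more complex nested fit (Brownian motion is nested in the OU model when the attraction strength is zero). The log-likelihood values are hypothetical, and when the restricted parameter sits on a boundary the chi-squared reference is only approximate, which is one reason simulation of fitted models is recommended.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_simple, loglik_complex, df_diff):
    """Classical LRT: twice the gain in maximised log-likelihood of the more
    complex model, referred to a chi-squared distribution with df_diff
    degrees of freedom (the difference in number of free parameters)."""
    stat = 2.0 * (loglik_complex - loglik_simple)
    p_value = chi2.sf(stat, df_diff)
    return stat, p_value

# Hypothetical maximised log-likelihoods for a Brownian-motion fit and an OU
# fit (the OU model adds the attraction strength, so df_diff = 1).
stat, p = likelihood_ratio_test(loglik_simple=-104.2, loglik_complex=-101.8, df_diff=1)
print(stat, p)
```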

Relevance: 80.00%

Abstract:

In this paper we propose a new lifetime distribution which can handle bathtub-shaped, unimodal, increasing and decreasing hazard rate functions. The model has three parameters and generalizes the exponential power distribution proposed by Smith and Bain (1975) with the inclusion of an additional shape parameter. The maximum likelihood estimation procedure is discussed. A small-scale simulation study examines the performance of the likelihood ratio statistics under small and moderate sample sizes. Three real datasets illustrate the methodology. (C) 2010 Elsevier B.V. All rights reserved.
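
The new distribution itself is not specified in this abstract, so the sketch below only illustrates the general idea of checking the finite-sample behaviour of a likelihood ratio statistic by simulation, using an exponential null nested inside a Weibull alternative as a stand-in pair of lifetime models; the sample size, replication and chi-squared comparison are illustrative.

```python
import numpy as np
from scipy.stats import expon, weibull_min, chi2

def lr_stat(sample):
    """LR statistic for H0: exponential vs H1: Weibull (shape free, loc = 0)."""
    rate = 1.0 / sample.mean()                       # exponential MLE
    ll0 = expon.logpdf(sample, scale=1.0 / rate).sum()
    c, _, scale = weibull_min.fit(sample, floc=0)    # Weibull MLE
    ll1 = weibull_min.logpdf(sample, c, scale=scale).sum()
    return 2.0 * (ll1 - ll0)

rng = np.random.default_rng(1)
n, reps = 20, 200                                    # small samples, modest replication
stats = np.array([lr_stat(rng.exponential(size=n)) for _ in range(reps)])

# Compare the simulated 95th percentile under the null with the asymptotic
# chi-squared(1) critical value.
print(np.percentile(stats, 95), chi2.ppf(0.95, df=1))
```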

Relevance: 80.00%

Abstract:

The estimation of a data transformation is very useful to yield response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types; these models are based on exponential dispersion models. We propose a new class of transformed generalized linear models to extend the Box and Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models. We also give a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data to extend the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214-223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets. (C) 2009 Elsevier B.V. All rights reserved.
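
As a small illustration of the transformation idea that this class extends, the snippet below uses SciPy's Box-Cox routine to estimate the transformation parameter by maximum likelihood and then fits a straight line to the transformed response. It is a conventional Box-Cox example on simulated data, not the transformed generalized linear model machinery proposed in the paper.

```python
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, size=200)
y = np.exp(1.0 + 2.0 * x + rng.normal(scale=0.3, size=200))   # positive, skewed response

# Estimate the Box-Cox parameter lambda by maximum likelihood, then fit a
# straight line to the transformed response by least squares.
y_transformed, lam = boxcox(y)
slope, intercept = np.polyfit(x, y_transformed, 1)
print(f"estimated lambda = {lam:.3f}")             # close to 0 (log) for this data
print(f"fitted line: {intercept:.2f} + {slope:.2f} x")
```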

Relevance: 80.00%

Abstract:

We introduce in this paper a new class of discrete generalized nonlinear models that extends the binomial, Poisson and negative binomial models to cope with count data. This class includes some important models, such as log-nonlinear models, logit, probit and negative binomial nonlinear models, and generalized Poisson and generalized negative binomial regression models, among others, which enables the fitting of a wide range of models to count data. We derive an iterative process for fitting these models by maximum likelihood and discuss inference on the parameters. The usefulness of the new class of models is illustrated with an application to a real data set. (C) 2008 Elsevier B.V. All rights reserved.
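
The abstract does not give the iterative process itself; as a reminder of the kind of Fisher-scoring iteration on which such count models build, the sketch below implements iteratively reweighted least squares for the ordinary Poisson log-linear model on simulated data.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson regression with log link by iteratively reweighted least
    squares (Fisher scoring).  X is (n, p) and includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                       # mean under the log link
        W = mu                                 # Poisson working weights
        z = eta + (y - mu) / mu                # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(3)
x = rng.normal(size=500)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))
print(poisson_irls(X, y))                      # should be close to [0.5, 0.8]
```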

Relevance: 80.00%

Abstract:

We review some issues related to the implications of different missing data mechanisms on statistical inference for contingency tables and consider simulation studies to compare the results obtained under such models to those where the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random and missing completely at random models are more efficient even for small sample sizes, there are exceptions where they may not improve the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space as well as lack of identifiability of the parameters of saturated models may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that a MNAR model is misspecified because the estimate is on the boundary of the parameter space.
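
To make "partially classified data" concrete, here is a minimal EM sketch for a 2x2 table in which some units have only the row variable recorded; under an ignorable (MCAR/MAR) mechanism the E-step allocates those units across columns using the current conditional probabilities. The counts are invented for illustration, and the example does not address the MNAR boundary issues discussed above.

```python
import numpy as np

# Fully classified 2x2 counts plus supplementary counts classified by row only.
full = np.array([[30.0, 10.0],
                 [20.0, 40.0]])
row_only = np.array([25.0, 15.0])

probs = np.full((2, 2), 0.25)                 # starting cell probabilities
for _ in range(200):
    # E-step: split each row-only count across columns in proportion to the
    # current conditional column probabilities within that row.
    cond = probs / probs.sum(axis=1, keepdims=True)
    expected = full + row_only[:, None] * cond
    # M-step: re-estimate the cell probabilities from the completed table.
    probs = expected / expected.sum()

print(np.round(probs, 4))
```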

Relevance: 80.00%

Abstract:

In this paper, we discuss inferential aspects for the Grubbs model when the unknown quantity x (latent response) follows a skew-normal distribution, extending early results given in Arellano-Valle et al. (J Multivar Anal 96:265-281, 2005b). Maximum likelihood parameter estimates are computed via the EM-algorithm. Wald and likelihood ratio type statistics are used for hypothesis testing and we explain the apparent failure of the Wald statistics in detecting skewness via the profile likelihood function. The results and methods developed in this paper are illustrated with a numerical example.
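
For intuition about likelihood-ratio testing of skewness, the snippet below fits a normal and a skew-normal distribution to the same simulated sample and refers twice the gain in log-likelihood to a chi-squared distribution with one degree of freedom. This is a generic univariate illustration, not the Grubbs measurement-error model analysed in the paper.

```python
import numpy as np
from scipy.stats import norm, skewnorm, chi2

rng = np.random.default_rng(4)
data = skewnorm.rvs(a=4, size=300, random_state=rng)   # genuinely skewed sample

# Null model: normal (skewness shape parameter fixed at zero).
mu, sigma = norm.fit(data)
ll0 = norm.logpdf(data, mu, sigma).sum()

# Alternative: skew-normal with a free shape parameter.
a, loc, scale = skewnorm.fit(data)
ll1 = skewnorm.logpdf(data, a, loc, scale).sum()

lr = 2.0 * (ll1 - ll0)
print(lr, chi2.sf(lr, df=1))                   # small p-value -> evidence of skewness
```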

Relevance: 80.00%

Abstract:

Researchers typically tackle questions by constructing powerful, highly replicated sampling protocols or experimental designs. Such approaches often demand large sample sizes and are usually only conducted on a once-off basis. In contrast, many industries need to continually monitor phenomena such as equipment reliability, water quality, or the abundance of a pest. In such instances, the costs and time inherent in sampling preclude the use of highly intensive methods. Ideally, one wants to collect the absolute minimum number of samples needed to make an appropriate decision. Sequential sampling, wherein the sample size is a function of the results of the sampling process itself, offers a practicable solution. But smaller sample sizes equate to less knowledge about the population, and thus an increased risk of making an incorrect management decision. There are various statistical techniques to account for and measure risk in sequential sampling plans. We illustrate these methods and assess them using examples relating to the management of arthropod pests in commercial crops, but they can be applied to any situation where sequential sampling is used.
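
One classical way to fix that risk in advance is Wald's sequential probability ratio test, which sets the two error probabilities before sampling begins. The sketch below accumulates the log-likelihood ratio of Poisson counts under two candidate mean densities and stops when it crosses the resulting boundaries; the hypothesised means, error rates and counts are illustrative, not a plan from the paper.

```python
import math
from scipy.stats import poisson

def sprt_poisson(counts, m0, m1, alpha=0.05, beta=0.10):
    """Wald SPRT for H0: mean = m0 vs H1: mean = m1 (m1 > m0) on a stream of
    Poisson counts.  alpha and beta are the tolerated error probabilities."""
    upper = math.log((1 - beta) / alpha)       # cross -> accept H1 (e.g. treat the crop)
    lower = math.log(beta / (1 - alpha))       # cross -> accept H0 (no action needed)
    log_lr = 0.0
    for n, x in enumerate(counts, start=1):
        log_lr += poisson.logpmf(x, m1) - poisson.logpmf(x, m0)
        if log_lr >= upper:
            return n, "accept H1"
        if log_lr <= lower:
            return n, "accept H0"
    return len(counts), "keep sampling"

print(sprt_poisson([3, 4, 2, 5, 4, 3], m0=1.0, m1=3.0))
```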

Relevance: 80.00%

Abstract:

In an earlier paper we adopted a BEKK-GARCH framework and employed a systematic approach to examine structural breaks in HSIF (Hang Seng Index Futures) and HSI (Hang Seng Index) volatility. Switching dummy variables were included and tested in the variance equations to check for any structural changes in the autoregressive volatility structure due to events that have taken place in the Hong Kong market. A bi-variate GARCH model with 3 switching points was found to be superior, as it captured the potential structural changes in return volatilities. Abolition of the uptick rule, the increase of initial margins for the HSIF, and electronic trading of the HSIF were found to have a significant impact on the volatility structure of the HSIF and HSI. In this paper we include measures of daily trading volume from both markets in the estimation. Likelihood ratio tests indicate that the switching dummy variables become insignificant and the GARCH effects diminish but remain significant.
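
The original analysis is bi-variate (BEKK) and uses the Hong Kong series; as a much-simplified, hypothetical illustration of testing a switching dummy in a variance equation, the sketch below fits a univariate Gaussian GARCH(1,1) with and without a variance-shift dummy by direct likelihood maximisation and compares the two fits with a likelihood ratio test. The placeholder return series, break point and starting values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def garch_negloglik(params, r, dummy=None):
    """Gaussian GARCH(1,1) negative log-likelihood; if a 0/1 dummy series is
    supplied, an extra shift term enters the conditional variance equation."""
    omega, alpha, beta = params[:3]
    delta = params[3] if dummy is not None else 0.0
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        shift = delta * dummy[t] if dummy is not None else 0.0
        h[t] = max(omega + alpha * r[t - 1] ** 2 + beta * h[t - 1] + shift, 1e-12)
    return 0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

rng = np.random.default_rng(5)
r = rng.standard_normal(1000) * 0.01                  # placeholder return series
dummy = (np.arange(len(r)) >= 500).astype(float)      # hypothetical break point

fit0 = minimize(garch_negloglik, x0=[1e-5, 0.05, 0.90], args=(r,), method="Nelder-Mead")
fit1 = minimize(garch_negloglik, x0=[1e-5, 0.05, 0.90, 0.0], args=(r, dummy), method="Nelder-Mead")

lr = 2.0 * (fit0.fun - fit1.fun)                      # restricted minus unrestricted
print(lr, chi2.sf(lr, df=1))
```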

Relevance: 80.00%

Abstract:

In an earlier paper, we adopted a bi-variate BEKK–GARCH framework and employed a systematic approach to examine structural breaks in the Hang Seng Index and Index Futures market volatility. Switching dummy variables were included and tested in the variance equations to check for any structural changes in the autoregressive volatility structure due to the events that have taken place in the Hong Kong market surrounding the Asian markets crisis. In this paper, we include measures of daily trading volume from both markets in the estimation. Likelihood ratio tests indicate the switching dummy variables become insignificant and the GARCH effects diminish but remain significant. There is some evidence that the Sequential Arrival of Information Model (SIM) provides a platform to explain these market induced effects when volume of trade is accounted for.

Relevance: 80.00%

Abstract:

This paper proposes a novel biometric authentication method based on the recognition of drivers' dynamic handgrip on the steering wheel. A pressure-sensitive mat mounted on a steering wheel is employed to collect handgrip data exerted by the hands of drivers who intend to start the vehicle. A likelihood-ratio-based classifier is then designed to distinguish the rightful driver of a car after analyzing the inherent dynamic features of grasping. The experimental results obtained in this study show that mean acceptance rates of 85.4% for the trained subjects and mean rejection rates of 82.65% for the untrained ones are achieved by the classifier in the two batches of testing. It can be concluded that the driver verification approach based on dynamic handgrip recognition on the steering wheel is a promising biometric technology and will be further explored in the near future in smart car design.
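
As background on this type of classifier, below is a toy likelihood-ratio verifier in which genuine and impostor feature vectors are each modelled by a multivariate Gaussian, and a claim is accepted when the log-likelihood ratio exceeds a threshold. The simulated three-dimensional features, the Gaussian assumption and the zero threshold are illustrative choices, not details of the published system.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
# Hypothetical low-dimensional handgrip feature vectors.
genuine = rng.normal(loc=[1.0, 0.5, -0.3], scale=0.4, size=(200, 3))
impostor = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.6, size=(200, 3))

gen_model = multivariate_normal(genuine.mean(axis=0), np.cov(genuine, rowvar=False))
imp_model = multivariate_normal(impostor.mean(axis=0), np.cov(impostor, rowvar=False))

def verify(sample, threshold=0.0):
    """Accept the claimed identity when log p(sample | genuine) minus
    log p(sample | impostor) exceeds the operating threshold."""
    llr = gen_model.logpdf(sample) - imp_model.logpdf(sample)
    return llr, llr > threshold

print(verify(np.array([0.9, 0.6, -0.2])))   # near the genuine model: accepted
print(verify(np.array([0.0, 0.1, 0.1])))    # near the impostor model: rejected
```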

Relevance: 80.00%

Abstract:

This paper presents a novel driver verification algorithm based on the recognition of handgrip patterns on the steering wheel. A pressure-sensitive mat mounted on a steering wheel is employed to collect a series of pressure images exerted by the hands of drivers who intend to start the vehicle. Feature extraction from those images is then carried out in two major steps: Quad-Tree-based multi-resolution decomposition of the images and Principal Component Analysis (PCA)-based dimension reduction, followed by a likelihood-ratio classifier that labels drivers as known or unknown. The experimental results obtained in this study show that mean acceptance rates of 78.15% and 78.22% for the trained subjects and mean rejection rates of 93.92% and 90.93% for the untrained ones are achieved in two trials, respectively. It can be concluded that the driver verification approach based on handgrip recognition on the steering wheel is promising and will be further explored in the near future.
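
To make the dimension-reduction step concrete, here is a minimal PCA sketch via the singular value decomposition applied to flattened pressure images; the image resolution, sample size and number of retained components are arbitrary stand-ins rather than settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
images = rng.random((120, 32 * 32))          # 120 flattened pressure maps (stand-in data)

def pca_reduce(X, n_components):
    """Project the rows of X onto the top n_components principal directions."""
    Xc = X - X.mean(axis=0)                  # centre each pixel/feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores in the reduced space

features = pca_reduce(images, n_components=10)
print(features.shape)                        # (120, 10), ready for an LR classifier
```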

Relevance: 80.00%

Abstract:

To implement appropriate management of the cotton leafworm, it is necessary to build a sampling plan that allows the pest population density to be estimated quickly and accurately. This research aimed to determine a sequential sampling plan for Alabama argillacea (Hübner) in cotton, cultivar CNPA ITA-90. Data were collected in the 1998/99 growing season at Fazenda Itamarati Sul S/A, located in the municipality of Ponta Porã, MS, in three areas of 10,000 m² each. The sampling areas consisted of 100 plots of 100 m². The number of small, medium and large larvae was determined weekly on five plants taken at random per plot. After verifying that all larval instars followed an aggregated distribution pattern, fitting the Negative Binomial Distribution throughout the infestation period, a sequential sampling plan was built according to the Sequential Probability Ratio Test (SPRT). A control threshold of two larvae per plant was adopted for constructing the sampling plan. Data analysis indicated two decision lines: the upper line, representing the condition under which adoption of a control method is recommended, defined by S1 = 4.8784 + 1.4227n; and the lower line, indicating that adoption of a control method is not necessary, defined by S0 = -4.8784 + 1.4227n. Sequential sampling estimated a maximum expected number of 16 sample units to decide whether or not control is needed.
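
The decision rule implied by those lines is straightforward to mechanise: after each sample unit, the cumulative larval count is compared with the two lines and sampling stops as soon as either is crossed. The sketch below simply codes that rule using the reported slope and intercepts; the example cumulative counts are invented.

```python
def sequential_decision(cumulative_counts):
    """Apply the published decision lines to a running total of larvae.
    cumulative_counts[i] is the cumulative count after i + 1 sample units."""
    for n, total in enumerate(cumulative_counts, start=1):
        upper = 4.8784 + 1.4227 * n      # S1: recommend control
        lower = -4.8784 + 1.4227 * n     # S0: control not needed
        if total >= upper:
            return n, "recommend control"
        if total <= lower:
            return n, "control not needed"
    return len(cumulative_counts), "continue sampling"

# Example: cumulative larval counts after successive five-plant sample units.
print(sequential_decision([3, 7, 12, 18, 25]))
```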