929 results for likelihood ratio test


Relevance: 90.00%

Abstract:

We present a method to integrate environmental time series into stock assessment models and to test the significance of correlations between population processes and the environmental time series. Parameters that relate the environmental time series to population processes are included in the stock assessment model, and likelihood ratio tests are used to determine whether these parameters significantly improve the fit to the data. Two approaches to integrating the environmental relationship are considered. In the environmental model, the population dynamics process (e.g. recruitment) is proportional to the environmental variable; the environmental model with process error retains this proportionality but also allows additional temporal variation (process error) constrained by a log-normal distribution. The methods are tested by simulation analysis and compared with the traditional method of correlating model estimates with environmental variables outside the estimation procedure. In the traditional method, the estimates of recruitment were provided by a model that allowed recruitment only a temporal variation constrained by a log-normal distribution. We illustrate the methods by applying them to test the statistical significance of the correlation between sea-surface temperature (SST) and recruitment to the snapper (Pagrus auratus) stock in the Hauraki Gulf–Bay of Plenty, New Zealand. Simulation analyses indicated that the integrated approach with additional process error is superior to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. The results suggest that, for the snapper stock, recruitment is positively correlated with SST at the time of spawning.
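The likelihood ratio comparison described above can be sketched in a few lines. This is a hedged toy with synthetic data, not the paper's stock assessment model: the null model gives log-recruitment a constant mean, the alternative adds a linear SST term, and for Gaussian errors the LR statistic reduces to n·log(RSS0/RSS1), referred to a chi-square(1) distribution.

```python
import math, random

random.seed(1)

# Hedged toy (synthetic data, not the paper's stock assessment model):
# log-recruitment is generated with a linear SST effect, then a
# constant-mean null model is compared against a model with an SST term.
n = 40
sst = [random.gauss(0, 1) for _ in range(n)]             # SST anomalies (invented)
log_rec = [0.5 * s + random.gauss(0, 0.5) for s in sst]  # log recruitment

# Null model: constant mean.
mean_y = sum(log_rec) / n
rss0 = sum((y - mean_y) ** 2 for y in log_rec)

# Alternative: simple linear regression on SST.
sx, sy = sum(sst), sum(log_rec)
sxx = sum(s * s for s in sst)
sxy = sum(s * y for s, y in zip(sst, log_rec))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
rss1 = sum((y - (a + b * s)) ** 2 for s, y in zip(sst, log_rec))

# For Gaussian errors the LR statistic is n * log(RSS0 / RSS1),
# asymptotically chi-square with 1 degree of freedom.
lr = n * math.log(rss0 / rss1)
p_value = math.erfc(math.sqrt(lr / 2))  # chi-square(1) survival function
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```

Adding a process-error term, as in the paper's second formulation, would replace the fixed proportionality with an estimated deviation per year; the nested-model comparison above stays the same in spirit.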

Relevance: 90.00%

Abstract:

The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. We ask a fundamental question: how much predictive power does TCP have about network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power to enable TCP (or variants) to perform well in generalized network settings. To that end, we use maximum likelihood ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated, given network responses such as distributions of packet delays or TCP throughput that are conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide effective means for estimating packet loss type; and that packet delay is a better signal of network state than short-term throughput. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect estimation.
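The detector idea can be illustrated with a minimal likelihood-ratio classifier. Everything here is assumed for illustration only (the exponential delay distributions and their means, 2.0 and 0.5, are invented, not measured TCP behaviour): each observed delay is assigned to whichever loss hypothesis gives it the higher likelihood.

```python
import math, random

random.seed(0)

# Hedged sketch: assume (for illustration only) that packet delays are
# exponentially distributed with different means under congestion loss
# versus wireless loss, and classify each delay by its likelihood ratio.
MEAN_CONG, MEAN_WLESS = 2.0, 0.5  # assumed mean delays, arbitrary units

def exp_pdf(x, mean):
    return math.exp(-x / mean) / mean

def classify(delay):
    # Equal priors: decide "congestion" when the likelihood ratio exceeds 1.
    lr = exp_pdf(delay, MEAN_CONG) / exp_pdf(delay, MEAN_WLESS)
    return "congestion" if lr > 1 else "wireless"

# Evaluate the detector on simulated losses of each type.
cong = [random.expovariate(1 / MEAN_CONG) for _ in range(2000)]
wless = [random.expovariate(1 / MEAN_WLESS) for _ in range(2000)]
acc_c = sum(classify(d) == "congestion" for d in cong) / len(cong)
acc_w = sum(classify(d) == "wireless" for d in wless) / len(wless)
print(f"accuracy on congestion losses {acc_c:.2f}, on wireless losses {acc_w:.2f}")
```

The more the two conditional delay distributions differ, the higher both accuracies climb, which is the abstract's point about congestion and wireless losses producing "sufficiently different statistics".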

Relevance: 90.00%

Abstract:

Aims: To determine whether routine outpatient monitoring of growth predicts adrenal suppression in prepubertal children treated with high dose inhaled glucocorticoid.

Methods: Observational study of 35 prepubertal children (aged 4–10 years) treated with at least 1000 µg/day of inhaled budesonide or glucocorticoid of equivalent potency for at least six months. Main outcome measures were: changes in height standard deviation score (HtSDS) over the 6 and 12 month periods preceding adrenal function testing, and the increment and peak cortisol after stimulation by a low dose tetracosactrin test. Adrenal suppression was defined as a peak cortisol below 500 nmol/l.

Results: The areas under the receiver operating characteristic curves for a decrease in HtSDS as a predictor of adrenal insufficiency 6 and 12 months prior to adrenal testing were 0.50 (SE 0.10) and 0.59 (SE 0.10). Prediction values of an HtSDS change of –0.5 for adrenal insufficiency at 12 months prior to testing were: sensitivity 13%, specificity 95%, and positive likelihood ratio 2.4. Peak cortisol reached correlated poorly with change in HtSDS (r = 0.23, p = 0.19 at 6 months; r = 0.33, p = 0.06 at 12 months).
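For readers unfamiliar with the metrics in these Results, here is how sensitivity, specificity and the positive likelihood ratio come out of a 2×2 table. The counts are hypothetical, chosen only to give figures of the same order as those reported, not the study's data.

```python
# Hypothetical 2x2 table (not the study's data): rows = adrenal suppression
# present/absent, columns = HtSDS fell below the cutoff / did not.
tp, fn = 2, 13   # suppressed children flagged / missed by the growth criterion
fp, tn = 1, 19   # non-suppressed children flagged / correctly passed

sensitivity = tp / (tp + fn)                  # fraction of true cases flagged
specificity = tn / (tn + fp)                  # fraction of non-cases passed
lr_positive = sensitivity / (1 - specificity) # how much a positive result raises the odds
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, LR+={lr_positive:.2f}")
```

A high specificity with very low sensitivity, as here, is exactly the pattern the study reports: a large HtSDS fall is rarely a false alarm but misses most truly suppressed children.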

Conclusions: Monitoring growth does not enable prediction of which children treated with high dose inhaled glucocorticoids are at risk of potentially serious adrenal suppression. Both growth and adrenal function should be monitored in patients on high dose inhaled glucocorticoids. Further research is required to determine the optimal frequency of monitoring adrenal function.

Relevance: 90.00%

Abstract:

This study aimed to examine the structure of the statistics anxiety rating scale. Responses from 650 undergraduate psychology students throughout the UK were collected through an online study. Based on previous research, three different models were specified and estimated using confirmatory factor analysis. Fit indices were used to determine whether each model fitted the data, and a likelihood ratio difference test was used to determine the best fitting model. The original six factor model was the best explanation of the data. All six subscales were intercorrelated and internally consistent. It was concluded that the statistics anxiety rating scale measures the six subscales it was designed to assess in a UK population.
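The likelihood ratio difference test mentioned above compares nested models via the difference of their chi-square fit statistics on the difference of their degrees of freedom. A hedged sketch with invented fit values (the chi-squares and df below are illustrative, not the study's):

```python
import math

# Invented fit statistics for two nested factor models (illustrative only):
# the LR difference test refers chi2_restricted - chi2_full to a chi-square
# with df_restricted - df_full degrees of freedom.
chisq_restricted, df_restricted = 310.0, 174   # e.g. a more constrained model
chisq_full, df_full = 250.0, 164               # e.g. the six-factor model

diff = chisq_restricted - chisq_full
df_diff = df_restricted - df_full

def chi2_sf(x, df):
    # Survival function of the chi-square distribution for even df
    # (finite series form, so no external stats library is needed).
    term = math.exp(-x / 2)
    s = term
    for k in range(1, df // 2):
        term *= (x / 2) / k
        s += term
    return s

p = chi2_sf(diff, df_diff)
print(f"delta chi-square = {diff}, delta df = {df_diff}, p = {p:.3g}")
```

A small p-value here says the extra factors buy a genuine improvement in fit, which is how the six-factor model would beat a more constrained rival.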

Relevance: 90.00%

Abstract:

BACKGROUND: The impact of bronchiectasis on sedentary behaviour and physical activity is unknown. It is important to explore this to identify the need for physical activity interventions and how to tailor interventions to this patient population. We aimed to explore the patterns and correlates of sedentary behaviour and physical activity in bronchiectasis.

METHODS: Physical activity was assessed in 63 patients with bronchiectasis using an ActiGraph GT3X+ accelerometer over seven days. Patients completed questionnaires on health-related quality of life and attitudes to physical activity (questions based on an adaptation of the transtheoretical model (TTM) of behaviour change), spirometry, and the modified shuttle test (MST). Multiple linear regression analysis using forward selection based on likelihood ratio statistics explored the correlates of sedentary behaviour and physical activity dimensions. Between-group analyses using independent-sample t-tests explored differences for selected variables.

RESULTS: Fifty-five patients had complete datasets. Average daily time, mean (standard deviation), spent in sedentary behaviour was 634 (77) min, in light-lifestyle physical activity 207 (63) min, and in moderate-vigorous physical activity (MVPA) 25 (20) min. Only 11% of patients met recommended guidelines. Forced expiratory volume in one second percentage predicted (FEV1% predicted) and disease severity were not correlates of sedentary behaviour or physical activity. For sedentary behaviour, the decisional balance 'pros' score was the only correlate. Performance on the MST was the strongest correlate of physical activity. In addition to the MST, there were other important correlates for MVPA accumulated in ≥10-minute bouts (QOL-B Social Functioning) and for activity energy expenditure (Body Mass Index and QOL-B Respiratory Symptoms).

CONCLUSIONS: Patients with bronchiectasis demonstrated a largely inactive lifestyle and few met the recommended physical activity guidelines. Exercise capacity was the strongest correlate of physical activity, and dimensions of the QOL-B were also important. FEV1% predicted and disease severity were not correlates of sedentary behaviour or physical activity. The inclusion of a range of physical activity dimensions could facilitate in-depth exploration of patterns of physical activity. This study demonstrates the need for interventions targeted at reducing sedentary behaviour and increasing physical activity, and provides information to tailor interventions to the bronchiectasis population.
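Forward selection on likelihood ratio statistics, as used in the Methods above, can be sketched for a Gaussian linear model: at each step the candidate whose addition gives the largest LR improvement, n·log(RSS_old/RSS_new), is added if it exceeds the 5% chi-square(1) critical value. The data and variable names below are synthetic stand-ins, not the study's dataset.

```python
import math, random

random.seed(2)

# Synthetic stand-in data (not the study's): forward selection for a
# Gaussian linear model driven by the LR statistic n * log(RSS_old / RSS_new),
# accepting a predictor when the statistic exceeds 3.84 (chi-square(1), 5%).
n = 80
cand = {
    "exercise_capacity": [random.gauss(0, 1) for _ in range(n)],
    "fev1_pct": [random.gauss(0, 1) for _ in range(n)],  # no true effect here
}
y = [1.5 * v + random.gauss(0, 1) for v in cand["exercise_capacity"]]

def fit_rss(y, cols):
    # Ordinary least squares via normal equations and Gaussian elimination.
    X = [[1.0] + [c[i] for c in cols] for i in range(len(y))]
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    for i in range(p):                       # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * p
    for i in reversed(range(p)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return sum((yi - sum(be * xi for be, xi in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

selected, rss_cur = [], fit_rss(y, [])
while True:
    best = None
    for name in cand:
        if name in selected:
            continue
        r = fit_rss(y, [cand[s] for s in selected] + [cand[name]])
        lr = n * math.log(rss_cur / r)
        if best is None or lr > best[1]:
            best = (name, lr, r)
    if best is None or best[1] < 3.84:       # stop: no significant improvement
        break
    selected.append(best[0])
    rss_cur = best[2]

print("selected predictors:", selected)
```

With the strong synthetic effect, the exercise-capacity stand-in is picked up, mirroring the study's finding that MST performance was the dominant correlate while the no-effect variable is usually screened out.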


Relevance: 90.00%

Abstract:

In 2003, prostate cancer (PCa) was estimated to be the most commonly diagnosed cancer and the third leading cause of cancer death in Canada. During PCa population screening, approximately 25% of patients with a normal digital rectal examination (DRE) and an intermediate serum prostate specific antigen (PSA) level have PCa. Since all such patients typically undergo biopsy, approximately 75% of these procedures are expected to be unnecessary. The purpose of this study was to compare the efficacy of clinical tests and algorithms in stage II screening for PCa while preventing unnecessary biopsies. The sample consisted of 201 consecutive men suspected of PCa on the basis of a DRE and serum PSA, who were referred for venipuncture and transrectal ultrasound (TRUS). Clinical tests included TRUS, age-specific reference range PSA (Age-PSA), prostate specific antigen density (PSAD), and free-to-total prostate specific antigen ratio (%fPSA). Clinical results were evaluated individually and within algorithms. Cutoffs of 0.12 and 0.15 ng/ml/cc were employed for PSAD; cutoffs that would provide a minimum sensitivity of 0.90 and 0.95, respectively, were utilized for %fPSA. Statistical analysis included ROC curve analysis and calculation of sensitivity (Sens), specificity (Spec), and positive likelihood ratio (LR), with corresponding confidence intervals (CI). The %fPSA at a 23% cutoff ({Sens=0.92; CI, 0.06}, {Spec=0.41; CI, 0.09}, {LR=1.56; CI, 0.11}) proved to be the most efficacious independent clinical test. The combination of PSAD (cutoff 0.15 ng/ml/cc) and %fPSA (cutoff 23%) ({Sens=0.93; CI, 0.06}, {Spec=0.38; CI, 0.08}, {LR=1.50; CI, 0.10}) was the most efficacious clinical algorithm. This study advocates the use of %fPSA at a cutoff of 23% when screening patients with an intermediate serum PSA and benign DRE.
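The reported {Sens, Spec, LR} triples come from 2×2 tables, with a confidence interval for LR+ usually formed on the log scale. The counts below are hypothetical, back-calculated only to land near the published Sens=0.92 / Spec=0.41 point, and the log-scale standard error is the standard formula for a ratio of two proportions.

```python
import math

# Hypothetical 2x2 counts (not the study's raw data), chosen to sit near
# the reported Sens = 0.92, Spec = 0.41 operating point for %fPSA at 23%.
a, c = 46, 4     # cancer:     test positive / test negative
b, d = 89, 62    # no cancer:  test positive / test negative

sens = a / (a + c)
spec = d / (b + d)
lr_pos = sens / (1 - spec)

# 95% CI on the log scale; SE(log LR+) = sqrt(1/a - 1/(a+c) + 1/b - 1/(b+d))
se_log = math.sqrt(1 / a - 1 / (a + c) + 1 / b - 1 / (b + d))
lo = lr_pos * math.exp(-1.96 * se_log)
hi = lr_pos * math.exp(1.96 * se_log)
print(f"Sens={sens:.2f} Spec={spec:.2f} LR+={lr_pos:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An LR+ near 1.5, as the study found, only modestly raises the post-test odds of cancer, which is why the authors frame the result in terms of avoiding biopsies (high sensitivity) rather than confirming disease.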

Relevance: 90.00%

Abstract:

In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
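The Monte Carlo test technique exploited in the paper can be sketched generically: when a statistic is pivotal under the null (its distribution is free of nuisance parameters), simulating it yields an exact finite-sample p-value. The statistic and data here are a deliberately simple stand-in (a standardized mean with Gaussian draws), not the MLR criteria of the paper.

```python
import random, statistics

random.seed(3)

# Generic sketch of a Monte Carlo (exact) test: the statistic below is
# pivotal under H0 (i.i.d. standard normal data), so its null distribution
# can be simulated directly. The observed data are invented.
def stat(sample):
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    return abs(m) / (s / len(sample) ** 0.5)

observed = stat([0.3, 1.1, -0.2, 0.8, 1.4, 0.5, 0.9, 1.2])

N = 999
sims = [stat([random.gauss(0, 1) for _ in range(8)]) for _ in range(N)]
# Exact Monte Carlo p-value: (1 + number of simulated values >= observed) / (N + 1)
p_mc = (1 + sum(s >= observed for s in sims)) / (N + 1)
print(f"observed statistic = {observed:.2f}, Monte Carlo p = {p_mc:.3f}")
```

The paper's bounds tests follow the same recipe, except that the simulated statistic bounds the distribution of the LR criterion rather than matching it exactly.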

Relevance: 90.00%

Abstract:

In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods; (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption; and (iii) mean-variance efficiency of the market portfolio is rejected less frequently once the possibility of non-normal errors is allowed for.
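As a much-reduced illustration of the restriction being tested, here is a single-asset toy: under the CAPM the intercept (alpha) of the excess-return regression is zero, and an LR statistic compares the restricted and unrestricted fits. This is only an asymptotic, single-equation sketch with simulated returns; the paper's tests are exact and multivariate, which this does not attempt.

```python
import math, random

random.seed(4)

# Toy single-asset illustration (the paper's tests are exact and
# multivariate; this only shows the alpha = 0 restriction): under the CAPM,
# the intercept alpha in r_i = alpha + beta * r_m + e is zero.
n = 120
rm = [random.gauss(0.5, 2.0) for _ in range(n)]    # market excess returns (invented)
ri = [1.2 * m + random.gauss(0, 1.0) for m in rm]  # asset generated with alpha = 0

def ols_rss(y, x, intercept):
    # Residual sum of squares of a one-regressor OLS fit.
    if intercept:
        sx, sy = sum(x), sum(y)
        sxx = sum(v * v for v in x)
        sxy = sum(a * b for a, b in zip(x, y))
        b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b0 = (sy - b1 * sx) / n
    else:
        b1 = sum(a * b for a, b in zip(x, y)) / sum(v * v for v in x)
        b0 = 0.0
    return sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))

rss1 = ols_rss(ri, rm, intercept=True)     # unrestricted: alpha free
rss0 = ols_rss(ri, rm, intercept=False)    # restricted: alpha = 0
lr = n * math.log(rss0 / rss1)             # asymptotic LR, chi-square(1)
p = math.erfc(math.sqrt(lr / 2))
print(f"LR = {lr:.3f}, p = {p:.3f}")
```

The multivariate version stacks one such regression per asset and tests all the alphas jointly, which is where the nuisance-parameter and non-normality issues tackled by the paper arise.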

Relevance: 90.00%

Abstract:

The last decade has seen growing interest in the econometric literature in the problems posed by weak instrumental variables, that is, situations in which the instruments are only weakly correlated with the variable to be instrumented. It is well known that when instruments are weak, the distributions of the Student, Wald, likelihood ratio, and Lagrange multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on returns-to-education models [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], where the instruments are weakly correlated with the instrumented variable, have shown that using these statistics often leads to unreliable results. One remedy is to use identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrument is added to a set of valid instruments? Do these procedures behave differently? And if instrument endogeneity poses major difficulties for statistical inference, can test procedures be proposed that select instruments that are both strong and valid?
Is it possible to propose instrument-selection procedures that remain valid even under weak identification? This thesis focuses on structural models (simultaneous equations models) and answers these questions through four essays. The first essay is published in Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In it, we analyse the effects of instrument endogeneity on two identification-robust test statistics: the Anderson and Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are in general consistent against the presence of invalid instruments (that is, they detect the presence of invalid instruments) regardless of instrument quality (strong or weak). We also describe cases in which this consistency may fail, where the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid. Next, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size grows), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterize the situations in which the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as with valid instruments (despite the presence of invalid instruments).
The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) specification tests and on the Revankar and Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and the alternative (power), including cases of deficient or weak identification (weak instruments). Our finite-sample analysis provides several new insights as well as extensions of earlier procedures. In particular, characterizing the finite-sample distribution of these statistics allows the construction of exact Monte Carlo exogeneity tests even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (the size is controlled). Moreover, we provide a characterization of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all the instruments are weak [similar to Guggenberger (2008)]; power exists, however, as long as at least one instrument is strong. Guggenberger's (2008) conclusion concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, we present a Monte Carlo analysis indicating that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when the instruments are weak and the endogeneity is moderate [a conclusion similar to Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental variables method should be applied only when strong instruments are clearly available.
The conclusions of Guggenberger (2008) are therefore qualified and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relationship between trade openness and economic growth, and the well-known returns-to-education problem. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of that test which is valid even with non-Gaussian errors. Unlike the usual exogeneity test procedures (the Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test solves a common problem in empirical work, namely testing the partial exogeneity of a subset of variables. We propose two new pre-test estimators based on the Wald test which outperform (in terms of mean squared error) the usual IV estimator when the instruments are weak and the endogeneity moderate. We also show that this test can serve as an instrument-selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage equation [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that the mother's education helps explain her son's dropping out of school, that output is an endogenous variable in the estimation of the firm's cost, and that the fuel price is a valid instrument for output. The fourth essay solves two very important problems in the econometric literature. First, although the initial or extended Wald test allows confidence regions to be built and linear restrictions on covariances to be tested, it assumes that the model parameters are identified.
When identification is weak (instruments weakly correlated with the instrumented variable), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure for building confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. The results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control the size and have power even when the instruments are weak, which allows us to propose a valid instrument-selection procedure even under identification problems. The instrument-selection procedure is based on two new pre-test estimators that combine the usual IV estimator and partial IV estimators. Our simulations show that: (1) like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and the endogeneity moderate; (2) the pre-test estimators perform very well overall compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relationship between trade openness and economic growth, and the returns-to-education model.
In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are severely weak in the second [Bound (1995), Doko and Dufour (2009)]. Consistent with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
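The Anderson-Rubin statistic at the heart of the first essay can be sketched for one endogenous regressor and one instrument: to test beta = beta0, regress y − beta0·Y on the instrument and test whether its coefficient is zero, a construction whose validity does not rest on the instrument being strong. The simulated design below (coefficients 0.8, 0.5, the true beta of 1.0) is invented for illustration.

```python
import random

random.seed(5)

# Invented simultaneous-equations design: one endogenous regressor Y,
# one instrument z, structural error u correlated with Y.
n = 200
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
Y = [0.8 * zi + 0.5 * ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [1.0 * Yi + ui for Yi, ui in zip(Y, u)]   # true beta = 1.0

def ar_stat(beta0):
    # Anderson-Rubin idea: under H0 beta = beta0, y - beta0 * Y should be
    # unrelated to the instrument; test this with the regression F statistic.
    v = [yi - beta0 * Yi for yi, Yi in zip(y, Y)]
    mv = sum(v) / n
    rss0 = sum((vi - mv) ** 2 for vi in v)
    sz, sv = sum(z), sum(v)
    szz = sum(a * a for a in z)
    szv = sum(a * b for a, b in zip(z, v))
    b1 = (n * szv - sz * sv) / (n * szz - sz * sz)
    b0 = (sv - b1 * sz) / n
    rss1 = sum((vi - b0 - b1 * zi) ** 2 for vi, zi in zip(v, z))
    return (rss0 - rss1) / (rss1 / (n - 2))

print(f"AR F at true beta: {ar_stat(1.0):.2f}, at beta0 = 3: {ar_stat(3.0):.2f}")
```

The thesis's question is what happens to such statistics when z itself is correlated with u; in this valid-instrument sketch the statistic is small at the true beta and large away from it.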

Relevance: 90.00%

Abstract:

Objective: to establish the prevalence of sexual dysfunction in patients undergoing correction of urinary incontinence by sling urethrocystopexy at Hospital Universitario Mayor. Methods: we conducted an analytical cross-sectional study assessing sexual dysfunction, using the PISQ-12 questionnaire, in patients with urinary incontinence six months after placement of a suburethral sling for female urinary incontinence, and examined the association with other factors such as the type of surgery combined with the sling, menopause, hormone therapy, age, and number of pregnancies, using Pearson's chi-square test of association, Fisher's exact test, or the exact likelihood ratio test (expected values < 5). Results: the prevalence of sexual dysfunction was 27% (12 patients); of these, 25% had moderate dysfunction (11 patients) and 2% severe dysfunction (1 patient) according to the PISQ-12 scale. Sexual dysfunction was more frequent in patients with stage 2 posterior prolapse (4 of 5 women), followed by stage 2 anterior and posterior prolapse (4 of 10 women); the other categories were smaller, and the association was significant (p = 0.011, exact likelihood ratio test). Conclusion: from this study we can conclude that pelvic floor surgery (colporrhaphy) concomitant with suburethral sling surgery for urinary incontinence is associated with a greater degree of female sexual dysfunction.
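The exact tests used here operate on small 2×2 tables where chi-square approximations fail. A sketch of Fisher's exact test on a hypothetical table (the counts are invented, loosely echoing the "4 of 5 women" pattern, not the study's full data):

```python
from math import comb

# Hypothetical 2x2 table (invented, not the study's data):
# rows = sexual dysfunction yes/no, columns = stage 2 posterior prolapse yes/no.
a, b, c, d = 4, 1, 8, 23

def table_prob(a, b, c, d):
    # Hypergeometric probability of a 2x2 table with the given margins.
    n = a + b + c + d
    return comb(a + b, a) * comb(c + d, c) / comb(n, a + c)

row1, col1, n = a + b, a + c, a + b + c + d
p_obs = table_prob(a, b, c, d)

# Two-sided Fisher exact p-value: sum the probabilities of all tables with
# the same margins that are no more probable than the observed table.
p = 0.0
for x in range(0, min(row1, col1) + 1):
    t = (x, row1 - x, col1 - x, n - row1 - col1 + x)
    if min(t) >= 0 and table_prob(*t) <= p_obs + 1e-12:
        p += table_prob(*t)

print(f"Fisher exact p = {p:.4f}")
```

The exact likelihood ratio test reported in the abstract follows the same enumeration idea, ordering tables by the LR statistic instead of by their hypergeometric probability.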

Relevance: 90.00%

Abstract:

INTRODUCTION: The diagnosis of pulmonary embolism (PE) has been a clinical challenge; despite advances in diagnostic modalities and therapeutic options, PE remains an underdiagnosed and lethal entity. Blood D-dimer measurement, with a cutoff of 500 mcg/L, is therefore an excellent screening test for patients in the emergency department. This initial assessment should be complemented with chest CT angiography, a decision that must be made early in order to avoid life-threatening complications. METHODS: We conducted a retrospective diagnostic test study reviewing the clinical records of 109 adult patients at Fundación Santa Fe de Bogotá who underwent chest CT angiography with a PE protocol, had a low or intermediate clinical probability of pulmonary embolism by Wells criteria, and also had a D-dimer measurement. The sensitivity and specificity of the D-dimer were calculated taking into account the pre-test clinical probability given by the Wells criteria, and positive and negative likelihood ratios were calculated for each D-dimer cutoff. RESULTS: The study showed a sensitivity of 100% for D-dimer values below 1100 mcg/L in patients with low probability, and a sensitivity of 100% for values below 700 mcg/L in patients with intermediate probability. DISCUSSION: Patients with a low pre-test probability by Wells criteria and D-dimer values below 1100 mcg/L, and those with intermediate probability and values below 700 mcg/L, do not require further studies, which substantially reduces the use of chest CT angiography and lowers the cost of care.

Relevance: 90.00%

Abstract:

CSF lactate concentration in patients with suspected postoperative meningitis after cerebral aneurysm clipping for spontaneous subarachnoid haemorrhage was measured prospectively over a three-year period. A total of 32 cerebrospinal fluid samples were analysed; lactate concentration was measured and compared against CSF culture. Cultures were positive in five patients, for an infection prevalence of 15%. Using a lactate threshold of 4 mmol/L, we found a sensitivity of 80%, specificity of 52%, PPV of 23%, NPV of 93%, and a positive likelihood ratio (LHR) of 1.66 with a post-test probability of 15% for CSF lactate concentration in the diagnosis of postoperative meningitis in patients with aneurysmal subarachnoid haemorrhage. CSF lactate concentration has limited performance in the diagnosis of postoperative meningitis in patients with aneurysmal subarachnoid haemorrhage.

Relevance: 90.00%

Abstract:

For many networks in nature, science and technology, it is possible to order the nodes so that most links are short-range, connecting near-neighbours, and relatively few long-range links, or shortcuts, are present. Given a network as a set of observed links (interactions), the task of finding an ordering of the nodes that reveals such a range-dependent structure is closely related to some sparse matrix reordering problems arising in scientific computation. The spectral, or Fiedler vector, approach for sparse matrix reordering has successfully been applied to biological data sets, revealing useful structures and subpatterns. In this work we argue that a periodic analogue of the standard reordering task is also highly relevant. Here, rather than encouraging nonzeros only to lie close to the diagonal of a suitably ordered adjacency matrix, we also allow them to inhabit the off-diagonal corners. Indeed, for the classic small-world model of Watts & Strogatz (1998, Collective dynamics of ‘small-world’ networks. Nature, 393, 440–442) this type of periodic structure is inherent. We therefore devise and test a new spectral algorithm for periodic reordering. By generalizing the range-dependent random graph class of Grindrod (2002, Range-dependent random graphs and their application to modeling large small-world proteome datasets. Phys. Rev. E, 66, 066702-1–066702-7) to the periodic case, we can also construct a computable likelihood ratio that suggests whether a given network is inherently linear or periodic. Tests on synthetic data show that the new algorithm can detect periodic structure, even in the presence of noise. Further experiments on real biological data sets then show that some networks are better regarded as periodic than linear. Hence, we find both qualitative (reordered networks plots) and quantitative (likelihood ratios) evidence of periodicity in biological networks.
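The Fiedler-vector reordering the paper builds on can be demonstrated on a toy graph: hide a path under a random relabelling, then sort the nodes by the eigenvector of the Laplacian's second-smallest eigenvalue. The graph, and the use of projected power iteration in place of a sparse eigensolver, are illustrative choices only.

```python
import random

random.seed(6)

# Toy demonstration of spectral (Fiedler-vector) reordering: a path graph
# with scrambled node labels should be recovered by sorting nodes on the
# eigenvector of the Laplacian's second-smallest eigenvalue.
n = 10
perm = list(range(n))
random.shuffle(perm)                        # hidden relabelling of a path
edges = [(perm[i], perm[i + 1]) for i in range(n - 1)]

# Build the graph Laplacian L = D - A.
L = [[0.0] * n for _ in range(n)]
for a, b in edges:
    L[a][a] += 1; L[b][b] += 1
    L[a][b] -= 1; L[b][a] -= 1

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(n)) for row in M]

# Power iteration on (c*I - L), projecting out the all-ones eigenvector,
# converges to the Fiedler vector; c = 2 * max degree bounds L's spectrum.
c = 2 * max(L[i][i] for i in range(n))
v = [random.gauss(0, 1) for _ in range(n)]
for _ in range(2000):
    w = [c * vi - li for vi, li in zip(v, matvec(L, v))]
    mean = sum(w) / n
    w = [wi - mean for wi in w]             # stay orthogonal to the constant vector
    norm = sum(wi * wi for wi in w) ** 0.5
    v = [wi / norm for wi in w]

order = sorted(range(n), key=lambda i: v[i])
# Consecutive nodes in the spectral ordering should be exactly the path edges.
recovered = all((a, b) in edges or (b, a) in edges
                for a, b in zip(order, order[1:]))
print("path recovered:", recovered)
```

The paper's periodic variant changes the target structure from a diagonal band to a band plus off-diagonal corners, so a different (circular) embedding of the nodes is needed, but the spectral machinery is analogous.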

Relevance: 90.00%

Abstract:

Considering the Wald, score, and likelihood ratio asymptotic test statistics, we analyze a multivariate null intercept errors-in-variables regression model, where both the explanatory and the response variables are subject to measurement errors and a possible structure of dependency between the measurements taken within the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, taking the classical approach, we analyze the asymptotic test statistics and present a simulation study comparing the behavior of the three statistics for different sample sizes, parameter values, and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real data illustrations: the odontological data set from Hadgu and Koch (1999) and a quality control data set.
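The three asymptotic statistics compared in the article can be contrasted on a much simpler problem than the errors-in-variables model: testing a binomial proportion, where the Wald, score and LR statistics all have closed forms and are asymptotically equivalent yet differ in finite samples. The counts below are arbitrary.

```python
import math

# Illustrative trio of asymptotic tests for H0: p = p0 with k successes in
# n Bernoulli trials (arbitrary numbers, not the article's model).
n, k, p0 = 50, 32, 0.5
phat = k / n

wald = (phat - p0) ** 2 / (phat * (1 - phat) / n)   # variance at the estimate
score = (phat - p0) ** 2 / (p0 * (1 - p0) / n)      # variance at the null
lr = 2 * (k * math.log(phat / p0)
          + (n - k) * math.log((1 - phat) / (1 - p0)))

for name, s in [("Wald", wald), ("score", score), ("LR", lr)]:
    p_val = math.erfc(math.sqrt(s / 2))             # chi-square(1) p-value
    print(f"{name}: statistic = {s:.3f}, p = {p_val:.4f}")
```

All three share the chi-square(1) limit, but their finite-sample behaviour differs, which is precisely what the article's simulation study quantifies for its errors-in-variables setting.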

Relevance: 90.00%

Abstract:

The main purpose of this work is to study the behaviour of Skovgaard's [Skovgaard, I.M., 2001. Likelihood asymptotics. Scandinavian Journal of Statistics 28, 3-32] adjusted likelihood ratio statistic in testing simple hypotheses in a new class of regression models proposed here. The proposed class considers Dirichlet distributed observations, with the parameters that index the Dirichlet distributions related to covariates and unknown regression coefficients. This class is useful for modelling data consisting of multivariate positive observations summing to one, and generalizes the beta regression model described in Vasconcellos and Cribari-Neto [Vasconcellos, K.L.P., Cribari-Neto, F., 2005. Improved maximum likelihood estimation in a new class of beta regression models. Brazilian Journal of Probability and Statistics 19, 13-31]. We show that, for our model, Skovgaard's adjusted likelihood ratio statistic has a simple compact form that can be easily implemented in standard statistical software. The adjusted statistic is approximately chi-squared distributed with a high degree of accuracy. Numerical simulations show that the modified test is more reliable in finite samples than the usual likelihood ratio procedure. An empirical application is also presented and discussed.
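Why adjust the LR statistic at all? A small simulation on a simpler model shows the issue: in small samples the unadjusted LR test's empirical size can drift from the nominal level, which is the kind of error higher-order corrections such as Skovgaard's adjustment are designed to shrink. The exponential-mean test below is an illustrative stand-in, not the Dirichlet regression model of the paper.

```python
import math, random

random.seed(7)

# Illustrative (not the paper's model): LR test of H0: mean = 1 for an
# exponential sample of size n. The LR statistic is 2n(xbar - 1 - log xbar);
# its empirical rejection rate is compared with the nominal 5% level.
n, reps, crit = 5, 20000, 3.841  # 3.841 = chi-square(1) 5% critical value

rejections = 0
for _ in range(reps):
    x = [random.expovariate(1.0) for _ in range(n)]  # true mean is 1, so H0 holds
    xbar = sum(x) / n
    lr = 2 * n * (xbar - 1 - math.log(xbar))
    if lr > crit:
        rejections += 1

size = rejections / reps
print(f"empirical size at nominal 5% (n = {n}): {size:.3f}")
```

In small-n runs like this the empirical size typically lands away from 0.05, illustrating why an adjusted statistic with a more accurate chi-square approximation is worth the extra algebra.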