929 results for likelihood ratio test


Relevance:

80.00%

Publisher:

Abstract:

Growth of a temperate reef-associated fish, the purple wrasse (Notolabrus fucicola), was examined from two sites on the east coast of Tasmania by using age- and length-based models. Models based on the von Bertalanffy growth function, in the standard and a reparameterized form, were constructed by using otolith-derived age estimates. Growth trajectories from tag-recaptures were used to construct length-based growth models derived from the GROTAG model, in turn a reparameterization of the Fabens model. Likelihood ratio tests (LRTs) determined the optimal parameterization of the GROTAG model, including estimators of individual growth variability, seasonal growth, measurement error, and outliers for each data set. Growth models and parameter estimates were compared by bootstrap confidence intervals, LRTs, and randomization tests and plots of bootstrap parameter estimates. The relative merit of these methods for comparing models and parameters was evaluated; LRTs combined with bootstrapping and randomization tests provided the most insight into the relationships between parameter estimates. Significant differences in growth of purple wrasse were found between sites in both length- and age-based models. A significant difference in the peak growth season was found between sites, and a large difference in growth rate between sexes was found at one site with the use of length-based models.
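
The model-comparison step can be illustrated with a small, self-contained sketch. This is not the GROTAG implementation used in the study; it is a hypothetical likelihood ratio test between two nested von Bertalanffy fits (site-specific growth coefficient K versus a shared K), assuming normal length-at-age residuals and simulated data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

def vb_length(age, linf, k, t0):
    """Standard von Bertalanffy growth function."""
    return linf * (1.0 - np.exp(-k * (age - t0)))

# Hypothetical otolith-derived ages (years) and lengths (mm) for two sites.
age1 = rng.uniform(1, 10, 80)
len1 = vb_length(age1, 320.0, 0.35, -0.5) + rng.normal(0.0, 15.0, 80)
age2 = rng.uniform(1, 10, 80)
len2 = vb_length(age2, 320.0, 0.25, -0.5) + rng.normal(0.0, 15.0, 80)

def negloglik_full(p):
    # Full model: shared L-infinity and t0, site-specific K.
    linf, k1, k2, t0, log_sig = p
    sig = np.exp(log_sig)
    r1 = len1 - vb_length(age1, linf, k1, t0)
    r2 = len2 - vb_length(age2, linf, k2, t0)
    n = r1.size + r2.size
    return n * log_sig + (np.sum(r1**2) + np.sum(r2**2)) / (2.0 * sig**2)

def negloglik_reduced(p):
    # Reduced model: a single K shared by both sites.
    linf, k, t0, log_sig = p
    return negloglik_full([linf, k, k, t0, log_sig])

opts = {"maxiter": 10000, "fatol": 1e-9}
fit_full = minimize(negloglik_full, x0=[300.0, 0.3, 0.3, 0.0, np.log(20.0)],
                    method="Nelder-Mead", options=opts)
fit_reduced = minimize(negloglik_reduced, x0=[300.0, 0.3, 0.0, np.log(20.0)],
                       method="Nelder-Mead", options=opts)

# LRT statistic: twice the log-likelihood gain, compared with chi-square (1 df).
lr_stat = 2.0 * (fit_reduced.fun - fit_full.fun)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR = {lr_stat:.2f}, p = {p_value:.4g}")
```

The same mechanics extend to the individual-variability, seasonal-growth, measurement-error, and outlier terms that the LRTs in the study were used to select.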

Relevance:

80.00%

Publisher:

Abstract:

Samples of 11,000 King George whiting (Sillaginodes punctata) from the South Australian commercial and recreational catch, supplemented by research samples, were aged from otoliths. Samples were analyzed from three coastal regions and by sex. Most sampling was undertaken at fish processing plants, from which only fish longer than the legal minimum length were obtained. A left-truncated normal distribution of lengths at monthly age was therefore employed as model likelihood. Mean length-at-monthly-age was described by a generalized von Bertalanffy formula with sinusoidal seasonality. Likelihood standard deviation was modeled to vary allometrically with mean length. A range of related formulas (with 6 to 8 parameters) for seasonal mean length at age were compared. In addition to likelihood ratio tests of relative fit, model selection criteria were a minimum occurrence of high uncertainties (>20% SE), of high correlations (>0.9, >0.95, and >0.99) and of parameter estimates at their biological limits, and we sought a model with a minimum number of parameters. A generalized von Bertalanffy formula with t0 fixed at 0 was chosen. The truncated likelihood alleviated the overestimation bias of mean length at age that would otherwise accrue from catch samples being restricted to legal sizes.
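
A sketch of the likelihood structure described above, written in one common notation (the study's exact parameterization may differ): with l_min the legal minimum length, the density of an observed length l at age a is a normal density renormalized over [l_min, ∞), the mean follows a seasonal von Bertalanffy curve (the Somers form is shown here), and the standard deviation varies allometrically with the mean.

```latex
f(l \mid a) = \frac{\phi\!\left(\frac{l-\mu(a)}{\sigma(a)}\right)}
                   {\sigma(a)\left[1-\Phi\!\left(\frac{l_{\min}-\mu(a)}{\sigma(a)}\right)\right]},
\qquad l \ge l_{\min},

\mu(a) = L_\infty\!\left(1 - e^{-K(a-t_0) - S(a) + S(t_0)}\right),
\quad S(t) = \frac{CK}{2\pi}\sin\!\big(2\pi(t-t_s)\big),
\quad \sigma(a) = \alpha\,\mu(a)^{\beta},
```

where φ and Φ are the standard normal density and distribution function. The denominator is what removes the overestimation bias that untruncated fitting would incur when only legal-sized fish are sampled.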

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a study which linked demographic variables with barriers affecting the adoption of domestic energy efficiency measures in large UK cities. The aim was to better understand the 'Energy Efficiency Gap' and improve the effectiveness of future energy efficiency initiatives. The data for this study was collected from 198 general population interviews (1.5-10 min) carried out across multiple locations in Manchester and Cardiff. The demographic variables were statistically linked to the identified barriers using a modified chi-square test of association (first order Rao-Scott corrected to compensate for multiple response data), and the effect size was estimated with an odds-ratio test. The results revealed that strong associations exist between demographics and barriers, specifically for the following variables: sex; marital status; education level; type of dwelling; number of occupants in household; residence (rent/own); and location (Manchester/Cardiff). The results and recommendations were aimed at city policy makers, local councils, and members of the construction/retrofit industry who are all working to improve the energy efficiency of the domestic built environment. © 2012 Elsevier Ltd.
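
As an illustration of the kind of association test involved (not the authors' analysis: the first-order Rao-Scott correction for multiple-response data is omitted here), a plain chi-square test of association and an odds ratio with a Wald 95% confidence interval can be computed from a hypothetical 2×2 table:

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

# Hypothetical counts: rows = owns / rents, columns = reports a cost barrier yes / no.
table = np.array([[45, 60],
                  [58, 35]])

chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2_stat:.2f}, df = {dof}, p = {p_value:.4f}")

# Odds ratio and Wald 95% CI computed on the log scale.
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
z = norm.ppf(0.975)
ci = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```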

Relevance:

80.00%

Publisher:

Abstract:

C.G.G. Aitken, Q. Shen, R. Jensen and B. Hayes. The evaluation of evidence for exponentially distributed data. Computational Statistics & Data Analysis, vol. 51, no. 12, pp. 5682-5693, 2007.

Relevance:

80.00%

Publisher:

Abstract:

(This Technical Report revises TR-BUCS-2003-011) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach to infer, at the source host, the reason for a packet loss, whether congestion or wireless transmission error. Our approach is "mostly" end-to-end since it requires only one long-term average quantity (namely, long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g. wireless access agent). Specifically, we use Maximum Likelihood Ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to underline the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path. We show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer efficiently conditional delay distributions. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and penalties on incorrect classification.
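
A stripped-down sketch of the classification idea (not the report's model): assume the delay observed just before a loss has a different conditional distribution for congestion losses than for wireless losses, and classify a loss by comparing the prior-weighted likelihood ratio to one. The distributions and priors below are invented for illustration, and the report's HMM machinery is not reproduced.

```python
from scipy.stats import gamma

# Hypothetical conditional delay models (seconds): congestion losses tend to follow
# high queueing delay; wireless losses are largely independent of delay.
delay_given_congestion = gamma(a=9.0, scale=0.02)   # mean ~0.18 s
delay_given_wireless = gamma(a=2.0, scale=0.03)     # mean ~0.06 s

# Hypothetical long-term priors (e.g., wireless-loss probability supplied by the network).
p_congestion, p_wireless = 0.7, 0.3

def classify_loss(delay):
    """Return 'congestion' or 'wireless' for a loss observed after the given delay."""
    lr = delay_given_congestion.pdf(delay) / delay_given_wireless.pdf(delay)
    posterior_odds = lr * (p_congestion / p_wireless)
    return "congestion" if posterior_odds > 1.0 else "wireless"

for d in (0.04, 0.10, 0.20):
    print(f"delay = {d:.2f} s -> classified as {classify_loss(d)}")
```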

Relevance:

80.00%

Publisher:

Abstract:

For two multinormal populations with equal covariance matrices the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. With the assumption of a known covariance matrix its distribution is derived and the expectation of its actual and apparent error rates evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution reviewed and evaluation of its probabilities of misclassification discussed. For known covariance matrices the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectation of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered. Estimation of true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive and a kernel method are compared by evaluating their biases and mean square errors. Some algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable. The source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product Normal kernel density estimates are used to give a kernel estimator of true log-odds. The effect of correlation in the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimension sizes the product kernel method is a good estimator of true log-odds.
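
For reference, the population rule underlying both discriminant functions discussed above is standard multinormal theory rather than a result specific to this thesis: with densities f1, f2, means μ1, μ2 and covariance matrices Σ1, Σ2, an observation x is allocated to population 1 when

```latex
\log\frac{f_1(x)}{f_2(x)}
= \tfrac{1}{2}\log\frac{|\Sigma_2|}{|\Sigma_1|}
- \tfrac{1}{2}(x-\mu_1)^{\top}\Sigma_1^{-1}(x-\mu_1)
+ \tfrac{1}{2}(x-\mu_2)^{\top}\Sigma_2^{-1}(x-\mu_2) > 0,
```

which is the quadratic discriminant function; when Σ1 = Σ2 = Σ the quadratic terms cancel and the rule reduces to the linear discriminant $(x - \tfrac{1}{2}(\mu_1+\mu_2))^{\top}\Sigma^{-1}(\mu_1-\mu_2) > 0$. The sample rules examined in the thesis replace these parameters with estimates, which is where the n1 ≠ n2 effects enter.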

Relevance:

80.00%

Publisher:

Abstract:

Three experiments investigated the effect of rarity on people's selection and interpretation of data in a variant of the pseudodiagnosticity task. For familiar (Experiment 1) but not for arbitrary (Experiment 3) materials, participants were more likely to select evidence so as to complete a likelihood ratio when the initial evidence they received was a single likelihood concerning a rare feature. This rarity effect with familiar materials was replicated in Experiment 2 where it was shown that participants were relatively insensitive to explicit manipulations of the likely diagnosticity of rare evidence. In contrast to the effects for data selection, there was an effect of rarity on confidence ratings after receipt of a single likelihood for arbitrary but not for familiar materials. It is suggested that selecting diagnostic evidence necessitates explicit consideration of the alternative hypothesis and that consideration of the possible consequences of the evidence for the alternative weakens the rarity effect in confidence ratings. Paradoxically, although rarity effects in evidence selection and confidence ratings are in the spirit of Bayesian reasoning, the effect on confidence ratings appears to rely on participants thinking less about the alternative hypothesis.
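
The normative benchmark behind these tasks is the odds form of Bayes' theorem: evidence D bears on hypotheses H1 and H2 only through a completed likelihood ratio,

```latex
\frac{P(H_1 \mid D)}{P(H_2 \mid D)}
= \frac{P(D \mid H_1)}{P(D \mid H_2)} \times \frac{P(H_1)}{P(H_2)} .
```

Selecting P(D | H2) after having seen P(D | H1) is what completes this ratio; a rare feature can be highly diagnostic because P(D | H2) is plausibly very small, which is one way of reading the rarity effect described above.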

Relevance:

80.00%

Publisher:

Abstract:

In this paper we argue that it is often adaptive to use one’s background beliefs when interpreting information that, from a normative point of view, is incomplete. In both of the experiments reported here participants were presented with an item possessing two features and were asked to judge, in the light of some evidence concerning the features, to which of two categories it was more likely that the item belonged. It was found that when participants received evidence relevant to just one of these hypothesised categories (i.e. evidence that did not form a Bayesian likelihood ratio) they used their background beliefs to interpret this information. In Experiment 2, on the other hand, participants behaved in a broadly Bayesian manner when the evidence they received constituted a completed likelihood ratio. We discuss the circumstances under which participants, when making their judgements, consider the alternative hypothesis. We conclude with a discussion of the implications of our results for an understanding of hypothesis testing, belief revision, and categorisation.

Relevance:

80.00%

Publisher:

Abstract:

Aims: To evaluate the role of novel biomarkers in early detection of acute myocardial infarction (MI) in patients admitted with acute chest pain.
Methods and results: A prospective study of 664 patients presenting to two coronary care units with chest pain was conducted over 3 years from 2003. Patients were assessed on admission: clinical characteristics, ECG (electrocardiogram), renal function, cardiac troponin T (cTnT), heart fatty acid binding protein (H-FABP), glycogen phosphorylase-BB, NT-pro-brain natriuretic peptide, D-dimer, hsCRP (high sensitivity C-reactive protein), myeloperoxidase, matrix metalloproteinase-9, pregnancy associated plasma protein-A, and soluble CD40 ligand. A ≥12 h cTnT sample was also obtained. MI was defined as cTnT ≥ 0.03 µg/L. In patients presenting <4 h after symptom onset, sensitivity of H-FABP for MI was significantly higher than that of admission cTnT (73 vs. 55%; P = 0.043). Specificity of H-FABP was 71%. None of the other biomarkers challenged cTnT. Combined use of H-FABP and cTnT (either one elevated initially) significantly improved sensitivity over either marker alone (85%; P = 0.004). This combined approach also improved the negative predictive value, negative likelihood ratio, and the risk ratio.
Conclusion: Assessment of H-FABP within the first 4 h of symptoms is superior to cTnT for detection of MI, and is a useful additional biomarker for patients with acute chest pain.
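
For context, the negative likelihood ratio referred to in the results follows directly from sensitivity and specificity. Using the early-presenter figures for H-FABP alone (sensitivity 73%, specificity 71%) purely as a worked illustration:

```latex
LR^{-} = \frac{1-\text{sensitivity}}{\text{specificity}} = \frac{1-0.73}{0.71} \approx 0.38,
\qquad
LR^{+} = \frac{\text{sensitivity}}{1-\text{specificity}} .
```

The negative predictive value additionally depends on the prevalence of MI in the presenting population and cannot be read off sensitivity and specificity alone.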

Relevance:

80.00%

Publisher:

Abstract:

Context. It has been established that the classical gas-phase production of interstellar methanol (CH3OH) cannot explain observed abundances. Instead it is now generally thought that the main formation path has to be by successive hydrogenation of solid CO on interstellar grain surfaces. Aims. While theoretical models and laboratory experiments show that methanol is efficiently formed from CO on cold grains, our aim is to test this scenario by astronomical observations of gas associated with young stellar objects (YSOs). Methods. We have observed the rotational transition quartets J = 2K – 1K of 12CH3OH and 13CH3OH at 96.7 and 94.4 GHz, respectively, towards a sample of massive YSOs in different stages of evolution. In addition, the J = 1-0 transitions of 12C18O and 13C18O were observed towards some of these sources. We use the 12C/13C ratio to discriminate between gas-phase and grain surface origin: If methanol is formed from CO on grains, the ratios should be similar in CH3OH and CO. If not, the ratio should be higher in CH3OH due to 13C fractionation in cold CO gas. We also estimate the abundance ratios between the nuclear spin types of methanol (E and A). If methanol is formed on grains, this ratio is likely to have been thermalized at the low physical temperature of the grain, and therefore show a relative over-abundance of A-methanol. Results. We show that the 12C/13C isotopic ratio is very similar in gas-phase CH3OH and C18O, on the spatial scale of about 40 arcsec, towards four YSOs. For two of our sources we find an overabundance of A-methanol as compared to E-methanol, corresponding to nuclear spin temperatures of 10 and 16 K. For the remaining five sources, the methanol E/A ratio is less than unity. Conclusions. While the 12C/13C ratio test is consistent with methanol formation from hydrogenation of CO on grain surfaces, the result of the E/A ratio test is inconclusive.

Relevance:

80.00%

Publisher:

Abstract:

Doctoral thesis, Biomedical Sciences (Neurosciences), Universidade de Lisboa, Faculdade de Medicina, 2014

Relevance:

80.00%

Publisher:

Abstract:

The level of information provided by ink evidence to the criminal and civil justice system is limited. The limitations arise from the weakness of the interpretative framework currently used, as proposed in the ASTM 1422-05 and 1789-04 standards on ink analysis. It is proposed to use the likelihood ratio from Bayes' theorem to interpret ink evidence. Unfortunately, when considering the analytical practices defined in the ASTM standards on ink analysis, it appears that current ink analytical practices do not allow for the level of reproducibility and accuracy required by a probabilistic framework. Such a framework relies on the evaluation of the statistics of ink characteristics using an ink reference database and the objective measurement of similarities between ink samples. A complete research programme was designed to (a) develop a standard methodology for analysing ink samples in a more reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in a forensic context. This report focuses on the first of these three stages. A calibration process, based on a standard dye ladder, is proposed to improve the reproducibility of ink analysis by HPTLC when inks are analysed at different times and/or by different examiners. The impact of this process on the variability between repetitive analyses of ink samples under various conditions is studied. The results show significant improvements in the reproducibility of ink analysis compared to traditional calibration methods.

Relevance:

80.00%

Publisher:

Abstract:

The evaluation of forensic evidence can occur at any level within the hierarchy of propositions depending on the question being asked and the amount and type of information that is taken into account within the evaluation. Commonly DNA evidence is reported given propositions that deal with the sub-source level in the hierarchy, which deals only with the possibility that a nominated individual is a source of DNA in a trace (or contributor to the DNA in the case of a mixed DNA trace). We explore the use of information obtained from examinations, presumptive and discriminating tests for body fluids, DNA concentrations and some case circumstances within a Bayesian network in order to provide assistance to the Courts that have to consider propositions at source level. We use a scenario in which the presence of blood is of interest as an exemplar and consider how DNA profiling results and the potential for laboratory error can be taken into account. We finish with examples of how the results of these reports could be presented in court using either numerical values or verbal descriptions of the results.
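
A very small sketch of the underlying arithmetic (not the authors' Bayesian network, which also incorporates examination findings, DNA quantity, case circumstances and laboratory error): if the presumptive test result and the DNA profiling result are treated as conditionally independent given the source-level propositions, their likelihood ratios multiply. All probabilities below are invented placeholders.

```python
# Source-level propositions:
#   Hp: the blood, and hence the DNA, came from the person of interest.
#   Hd: it came from an unknown, unrelated person.

# Placeholder probabilities of a positive presumptive blood test under each proposition.
p_test_given_hp = 0.95
p_test_given_hd = 0.30

# Placeholder probabilities of the observed DNA profile under each proposition
# (a real evaluation would use a random match probability and an error term).
p_dna_given_hp = 0.999
p_dna_given_hd = 1e-6

lr_test = p_test_given_hp / p_test_given_hd
lr_dna = p_dna_given_hp / p_dna_given_hd

# Assuming conditional independence, the combined likelihood ratio is the product.
lr_combined = lr_test * lr_dna
print(f"LR(test) = {lr_test:.2f}, LR(DNA) = {lr_dna:.3g}, combined LR = {lr_combined:.3g}")
```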

Relevance:

80.00%

Publisher:

Abstract:

Margin policy is used by regulators for the purpose of inhibiting excessive volatility and stabilizing the stock market in the long run. The effect of this policy on the stock market has been widely tested empirically. However, most prior studies are limited in the sense that they investigate the margin requirement for the overall stock market rather than for individual stocks, and the time periods examined are confined to the pre-1974 period, as no change in the margin requirement occurred post-1974 in the U.S. This thesis intends to address the above limitations by providing a direct examination of the effect of the margin requirement on return, volume, and volatility of individual companies and by using more recent data from the Canadian stock market. Using the methodologies of the variance ratio test and an event study with a conditional volatility (EGARCH) model, we find no convincing evidence that a change in the margin requirement affects subsequent stock return volatility. We also find similar results for returns and trading volume. These empirical findings lead us to conclude that the use of margin policy by regulators fails to achieve the goal of inhibiting speculative activity and stabilizing volatility.
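
As a sketch of the first of the two methodologies, the code below computes a plain Lo-MacKinlay variance ratio statistic under the homoskedastic i.i.d. null on simulated returns. It is not the thesis's implementation (the finite-sample bias corrections and the EGARCH event study are omitted), and the return series is a placeholder.

```python
import numpy as np
from scipy.stats import norm

def variance_ratio_test(returns, q):
    """Simple Lo-MacKinlay variance ratio test (homoskedastic null, no bias correction)."""
    r = np.asarray(returns, dtype=float)
    n = r.size
    mu = r.mean()
    var_1 = np.sum((r - mu) ** 2) / n
    # Variance of overlapping q-period returns, scaled by q.
    rq = np.convolve(r, np.ones(q), mode="valid")        # sums of q consecutive returns
    var_q = np.sum((rq - q * mu) ** 2) / (n * q)
    vr = var_q / var_1
    # Asymptotic standard error of VR(q) under the i.i.d. null.
    se = np.sqrt(2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * n))
    z = (vr - 1.0) / se
    return vr, z, 2.0 * norm.sf(abs(z))

rng = np.random.default_rng(1)
simulated_returns = rng.normal(0.0, 0.01, 1000)          # placeholder daily returns
vr, z, p = variance_ratio_test(simulated_returns, q=5)
print(f"VR(5) = {vr:.3f}, z = {z:.2f}, p = {p:.3f}")
```

A variance ratio close to one, as expected for the simulated white-noise returns here, indicates no detectable serial dependence at horizon q.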

Relevance:

80.00%

Publisher:

Abstract:

This paper tests the predictions of the Barro-Gordon model using US data on inflation and unemployment. To that end, it constructs a general game-theoretical model with asymmetric preferences that nests the Barro-Gordon model and a version of Cukierman’s model as special cases. Likelihood Ratio tests indicate that the restriction imposed by the Barro-Gordon model is rejected by the data but the one imposed by the version of Cukierman’s model is not. Reduced-form estimates are consistent with the view that the Federal Reserve weights more heavily positive than negative unemployment deviations from the expected natural rate.
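
The test invoked here is the standard asymptotic one: with ℓ denoting the maximized log-likelihood of the unrestricted (nesting) model and of the model restricted as under Barro-Gordon or the Cukierman variant,

```latex
LR = 2\left[\ell\big(\hat{\theta}_{\text{unrestricted}}\big) - \ell\big(\hat{\theta}_{\text{restricted}}\big)\right]
\;\xrightarrow{d}\; \chi^{2}_{q}
```

under the null that the q parameter restrictions hold, so a large statistic rejects the restricted special case; this is the sense in which the Barro-Gordon restriction is rejected by the data while the Cukierman restriction is not.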