973 results for Trimmed likelihood


Relevance: 20.00%

Abstract:

Wastewater containing human sewage is often discharged with little or no treatment into the Antarctic marine environment. Faecal sterols (primarily coprostanol) in sediments have been used to assess human sewage contamination in this environment, but in situ production and inputs from indigenous fauna can confound such determinations. Using gas chromatography with mass-spectral detection profiles of both C27 and C29 sterols, potential sources of faecal sterols were examined in nearshore marine sediments at sites proximal and distal to the wastewater outfall at Davis Station. Faeces from indigenous seals and penguins were also examined. Faeces from several indigenous species contained significant quantities of coprostanol but not 24-ethylcoprostanol, which is present in human faeces. In situ production of coprostanol and 24-ethylcoprostanol was identified through co-production of their respective epi-isomers at sites remote from the wastewater source and in sediments with high total organic matter. A C29 sterol-based polyphasic likelihood assessment matrix for human sewage contamination is presented, which distinguishes human faecal inputs from local faunal inputs and in situ production in the Antarctic environment. Sewage contamination was detected up to 1.5 km from Davis Station.

Relevance: 20.00%

Abstract:

This project developed, validated, and tested the reliability of a risk assessment tool for predicting the risk that a venous leg ulcer will fail to heal within 24 weeks. The tool will allow clinicians to determine realistic outcomes for their patients, promote early healing, and potentially avoid weeks of inappropriate therapy. It will also assist in addressing specific risk factors and guide decisions on early, alternative, tailored interventions.

Relevance: 20.00%

Abstract:

Purpose: The purpose of this paper is to review, critique, and develop a research agenda for the Elaboration Likelihood Model (ELM). The model was introduced by Petty and Cacioppo over three decades ago and has since been modified, revised, and extended. Given modern communication contexts, it is appropriate to question the model's validity and relevance. Design/methodology/approach: The authors develop a conceptual approach, based on a comprehensive review and critique of the ELM and its development since its inception. Findings: This paper focuses on major issues concerning the ELM, including its assumptions and descriptive nature, continuum questions, multi-channel processing, and mediating variables, before turning to the need to replicate the ELM and offering recommendations for its future development. Research limitations/implications: This paper poses a series of questions with research implications: whether the ELM could or should be replicated, whether it should be extended, a fuller conceptualization of argument quality, an explanation of movement along the continuum and between the central and peripheral routes to persuasion, and the use of new methodologies and technologies to better understand consumer thinking and behaviour. All of these relate to the current need to explore the relevance of the ELM in a more modern context. Practical implications: It is time to question the validity and relevance of the ELM. The diversity of online and offline media options and the variants of consumer choice raise significant issues. Originality/value: While the ELM continues to be widely cited and taught as one of the major cornerstones of persuasion, questions are raised concerning its relevance and validity in 21st-century communication contexts.

Relevance: 20.00%

Abstract:

Speech recognition in car environments has been identified as a valuable means for reducing driver distraction when operating noncritical in-car systems. Under such conditions, however, speech recognition accuracy degrades significantly, and techniques such as speech enhancement are required to improve it. Likelihood-maximizing (LIMA) frameworks optimize speech enhancement algorithms based on recognized state sequences rather than traditional signal-level criteria such as maximizing signal-to-noise ratio. LIMA frameworks typically require calibration utterances to generate optimized enhancement parameters that are then used for all subsequent utterances. Under such a scheme, suboptimal recognition performance occurs in noise conditions significantly different from those present during the calibration session, a serious problem in rapidly changing noise environments out on the open road. In this chapter, we propose a dialog-based design that allows regular optimization iterations in order to track the ever-changing noise conditions. Experiments using Mel-filterbank noise subtraction (MFNS) are performed to determine the optimization requirements for vehicular environments; they show that minimal optimization is required to improve speech recognition, avoid over-optimization, and ultimately assist with semi-real-time operation. It is also shown that the proposed design provides improved recognition performance over frameworks incorporating a calibration session only.
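To make the dialog-based design concrete, the sketch below shows the re-optimization loop in Python. It is a minimal illustration only: the function names (enhance, recognize, optimize_params) and the update interval are hypothetical stand-ins, not the framework's actual API.

    # Minimal sketch of a dialog-based LIMA loop (all names hypothetical).
    # Enhancement parameters are re-optimized at regular intervals so they
    # track changing in-car noise, instead of being fixed once at calibration.
    OPT_INTERVAL = 5  # assumed: re-optimize every 5 utterances

    def lima_dialog_loop(utterances, params, enhance, recognize, optimize_params):
        transcripts = []
        for i, audio in enumerate(utterances):
            cleaned = enhance(audio, params)    # e.g. Mel-filterbank noise subtraction
            states, text = recognize(cleaned)   # decoder returns state sequence + words
            transcripts.append(text)
            if (i + 1) % OPT_INTERVAL == 0:
                # LIMA criterion: pick params that maximize the acoustic
                # likelihood of the recognized state sequence, rather than
                # a signal-level criterion such as SNR.
                params = optimize_params(audio, states, params)
        return transcripts, params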

Relevance: 20.00%

Abstract:

The Fabens method is commonly used to estimate the growth parameters k and L∞ of the von Bertalanffy model from tag-recapture data. However, the Fabens method of estimation has an inherent bias when individual growth is variable. This paper presents an asymptotically unbiased method using a maximum likelihood approach that takes account of individual variability in both maximum length and age-at-tagging. It is assumed that each individual's growth follows a von Bertalanffy curve with its own maximum length and age-at-tagging. The parameter k is assumed to be a constant to ensure that the mean growth follows a von Bertalanffy curve and to avoid overparameterization. Our method also makes more efficient use of the measurements at tagging and recapture and includes diagnostic techniques for checking distributional assumptions. The method is reasonably robust and performs better than the Fabens method when individual growth departs from the von Bertalanffy relationship. When measurement error is negligible, the estimation involves maximizing the profile likelihood of one parameter only. The method is applied to tag-recapture data for the grooved tiger prawn (Penaeus semisulcatus) from the Gulf of Carpentaria, Australia.
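For reference, the Fabens method fits the expected length increment E[ΔL] = (L∞ − L1)(1 − e^(−kΔt)) between tagging and recapture. The Python sketch below shows this baseline fit under an assumed normal error model; it does not implement the paper's individual-variability likelihood, which additionally treats each animal's maximum length and age-at-tagging as random.

    # Baseline Fabens-style fit from tag-recapture data (illustrative only;
    # assumes i.i.d. normal residuals, which is what the paper improves upon).
    import numpy as np
    from scipy.optimize import minimize

    def fabens_nll(theta, L1, L2, dt):
        k, Linf, log_sigma = theta
        sigma2 = np.exp(2.0 * log_sigma)
        pred = L1 + (Linf - L1) * (1.0 - np.exp(-k * dt))  # expected recapture length
        return 0.5 * np.sum((L2 - pred)**2 / sigma2 + np.log(2.0 * np.pi * sigma2))

    def fit_fabens(L1, L2, dt):
        """L1: lengths at tagging, L2: lengths at recapture, dt: time at liberty."""
        start = np.array([0.5, 1.2 * L2.max(), 0.0])
        res = minimize(fabens_nll, start, args=(L1, L2, dt), method="Nelder-Mead")
        k, Linf, log_sigma = res.x
        return k, Linf, np.exp(log_sigma)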

Relevance: 20.00%

Abstract:

We propose a simple method of constructing quasi-likelihood functions for dependent data based on conditional mean-variance relationships, and apply the method to estimating the fractal dimension from box-counting data. Simulation studies were carried out to compare this method with traditional methods. We also applied the technique to real data from fishing grounds in the Gulf of Carpentaria, Australia.
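As a sketch of the idea, suppose the mean box count at box size d is μ(d) = c·d^(−D) and a working variance V(μ) ∝ μ is assumed (the variance form here is an illustrative assumption, not the relationship derived in the paper). Solving the quasi-score equations Σ (y − μ)/V(μ) · ∂μ/∂θ = 0 then estimates the fractal dimension D:

    # Quasi-score estimation of fractal dimension D from box-counting data,
    # under the assumed working variance V(mu) = phi * mu (phi cancels at the root).
    import numpy as np
    from scipy.optimize import fsolve

    def quasi_score(theta, d, y):
        logc, D = theta
        mu = np.exp(logc) * d**(-D)
        w = (y - mu) / mu                       # (y - mu) / V(mu), up to phi
        return [np.sum(w * mu),                 # d mu / d logc =  mu
                np.sum(w * mu * (-np.log(d)))]  # d mu / d D    = -mu * log(d)

    def fit_fractal_dim(d, y):
        # start from the usual log-log regression estimate of (log c, D)
        slope, intercept = np.polyfit(np.log(d), np.log(y), 1)
        return fsolve(quasi_score, [intercept, -slope], args=(d, y))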

Relevance: 20.00%

Abstract:

We consider estimation of mortality rates and growth parameters from length-frequency data of a fish stock, and derive the underlying length distribution of the population and the catch when there is individual variability in the von Bertalanffy growth parameter L∞. The model is flexible enough to accommodate (1) any recruitment pattern as a function of both time and length, (2) length-specific selectivity, and (3) varying fishing effort over time. The maximum likelihood method gives consistent estimates, provided the underlying distribution for individual variation in growth is correctly specified. Simulation results indicate that our method is reasonably robust to violations of the assumptions. The method is applied to tiger prawn (Penaeus semisulcatus) data to obtain estimates of natural and fishing mortality.

Relevance: 20.00%

Abstract:

A simple stochastic model of a fish population subject to natural and fishing mortalities is described. The fishing effort is assumed to vary over different periods but to be constant within each period. A maximum-likelihood approach is developed for estimating natural mortality (M) and the catchability coefficient (q) simultaneously from catch-and-effort data. If there is not enough contrast in the data to provide reliable estimates of both M and q, as is often the case in practice, the method can be used to obtain the best possible values of q for a range of possible values of M. These techniques are illustrated with tiger prawn (Penaeus semisulcatus) data from the Northern Prawn Fishery of Australia.
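The profiling idea in the last sentence can be sketched as follows: fix M on a grid and, for each value, find the q that best fits the observed catches. The dynamics below use the Baranov catch equation with effort constant within each period; the known initial abundance N0 and the lognormal error fit are simplifying assumptions for illustration, not the paper's exact likelihood.

    # Profile the catchability q over a grid of fixed natural mortality
    # values M, from catch-and-effort data (illustrative assumptions:
    # known initial abundance N0, lognormal catch errors).
    import numpy as np
    from scipy.optimize import minimize_scalar

    def expected_catch(q, M, effort, N0):
        N, preds = N0, []
        for E in effort:
            Z = M + q * E                                  # total mortality in period
            preds.append((q * E / Z) * (1.0 - np.exp(-Z)) * N)
            N *= np.exp(-Z)                                # survivors to next period
        return np.array(preds)

    def profile_q(catch, effort, N0, M_grid):
        """Best-fitting q for each fixed M."""
        best = {}
        for M in M_grid:
            sse = lambda q: np.sum((np.log(catch)
                                    - np.log(expected_catch(q, M, effort, N0)))**2)
            best[M] = minimize_scalar(sse, bounds=(1e-6, 1.0), method="bounded").x
        return best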

Relevance: 20.00%

Abstract:

Quasi-likelihood (QL) methods are often used to account for overdispersion in categorical data. This paper proposes a new way of constructing a QL function that stems from the conditional mean-variance relationship. Unlike traditional QL approaches to categorical data, this QL function is, in general, not a scaled version of the ordinary log-likelihood function. A simulation study is carried out to examine the performance of the proposed QL method. Fish mortality data from quantal response experiments are used for illustration.
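For contrast, the traditional scaled-QL approach to overdispersed quantal-response data (the approach the proposed QL departs from) keeps the binomial score equations and inflates the variance by a single dispersion factor. A quasi-binomial fit of that kind can be sketched with statsmodels; the dose-mortality numbers below are invented for illustration.

    # Traditional scaled-QL (quasi-binomial) fit: binomial GLM score
    # equations with dispersion estimated from Pearson's X^2. The paper's
    # new QL is, in general, NOT a scaled log-likelihood of this kind.
    import numpy as np
    import statsmodels.api as sm

    dose = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # hypothetical log doses
    n = np.array([20, 20, 20, 20, 20])               # fish per tank
    deaths = np.array([2, 5, 11, 14, 19])            # invented counts

    X = sm.add_constant(dose)
    resp = np.column_stack([deaths, n - deaths])     # successes / failures
    fit = sm.GLM(resp, X, family=sm.families.Binomial()).fit(scale="X2")
    print(fit.params, fit.scale)                     # coefficients + dispersion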

Relevance: 20.00%

Abstract:

Having the ability to work with complex models can be highly beneficial, but the computational cost of doing so is often large. Complex models often have intractable likelihoods, so methods that directly use the likelihood function are infeasible. In these situations, the benefits of likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood, which uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a good alternative when the model of interest is complex. One of these methods, the synthetic likelihood (SL), assumes a multivariate normal approximation to the likelihood of a summary statistic of interest. This paper explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison with a competitor known as approximate Bayesian computation (ABC), together with its sensitivity to tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose an alternative SL that uses an unbiased estimator of the exact working normal likelihood when the summary statistic has a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of this paper.
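The core SL computation is simple to sketch: simulate the model n times at a candidate θ, estimate the mean and covariance of the summary statistic, and evaluate a multivariate normal density at the observed summary. The version below plugs the sample estimates straight in; it is not the unbiased estimator of the working normal likelihood proposed in the paper, and simulate_summary is a hypothetical user-supplied function.

    # Plug-in synthetic log-likelihood at parameter theta (illustrative).
    import numpy as np
    from scipy.stats import multivariate_normal

    def synthetic_loglik(theta, s_obs, simulate_summary, n=200, seed=0):
        rng = np.random.default_rng(seed)
        sims = np.array([simulate_summary(theta, rng) for _ in range(n)])
        mu = sims.mean(axis=0)                 # estimated summary mean
        Sigma = np.cov(sims, rowvar=False)     # estimated summary covariance
        return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)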

Relevance: 20.00%

Abstract:

It is common to model the dynamics of fisheries using natural and fishing mortality rates estimated independently in two separate analyses. Fishing mortality is routinely estimated from widely available logbook data, whereas estimates of natural mortality have often required more specific, less frequently available data. However, in the case of the fishery for brown tiger prawn (Penaeus esculentus) in Moreton Bay, both fishing and natural mortality rates have been estimated from logbook data. The present work extended the fishing mortality model to incorporate an eco-physiological response of tiger prawns to temperature, and allowed recruitment timing to vary from year to year. These ecological characteristics of the dynamics of this fishery were ignored in the separate model that estimated natural mortality. Therefore, we propose to estimate both natural and fishing mortality rates within a single model using a consistent set of hypotheses. This approach was applied to Moreton Bay brown tiger prawn data collected between 1990 and 2010. Natural mortality was estimated by maximum likelihood to be 0.032 ± 0.002 week⁻¹, approximately 30% lower than the fixed value used in previous models of this fishery (0.045 week⁻¹).

Relevance: 20.00%

Abstract:

We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. The approximation is devised in order to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve at least as good accuracy as the best compared classifiers, using significantly less computational resources.
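Decomposability is what makes such a criterion practical: the score is a sum of one term per (variable, parent-set) family, so structure search can evaluate and update it locally. The sketch below computes the standard decomposable log-likelihood score for discrete data; it is not the f̂CLL approximation itself, which replaces each family term with a factorized conditional form.

    # Decomposable scoring of a Bayesian network structure on discrete data.
    # Standard maximized log-likelihood score, one term per family.
    import numpy as np
    from collections import Counter

    def family_loglik(data, child, parents):
        """Log-likelihood contribution of one (child, parents) family."""
        joint = Counter(tuple(row[p] for p in parents) + (row[child],) for row in data)
        par = Counter(tuple(row[p] for p in parents) for row in data)
        return sum(n * np.log(n / par[key[:-1]]) for key, n in joint.items())

    def decomposable_score(data, structure):
        """structure: dict mapping child index -> list of parent indices."""
        return sum(family_loglik(data, c, ps) for c, ps in structure.items())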