320 results for Interval estimation


Relevance: 20.00%

Abstract:

We consider estimation of mortality rates and growth parameters from length-frequency data of a fish stock when there is individual variability in the von Bertalanffy growth parameter L-infinity, and investigate the possible bias in the estimates when this variability is ignored. Three methods are examined: (i) the regression method based on Beverton and Holt's (1956, Rapp. P.V. Reun. Cons. Int. Explor. Mer, 140: 67-83) equation; (ii) the moment method of Powell (1979, Rapp. P.V. Reun. Cons. Int. Explor. Mer, 175: 167-169); and (iii) a generalization of Powell's method that estimates the individual variability and incorporates it into the estimation. The biases in the estimates from the existing methods are found to be substantial in general, even when individual variability in growth is small and recruitment is uniform; the generalized method performs better in terms of bias but is subject to larger variation. There is a need to develop robust and flexible methods to deal with individual variability in the analysis of length-frequency data.
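
As a concrete sketch of the quantities involved, the von Bertalanffy growth curve and the Beverton and Holt mean-length estimator of total mortality Z can be written as follows (function names and defaults are ours):

```python
import math

def vb_length(age, l_inf, k, t0=0.0):
    # von Bertalanffy growth curve: L(t) = L_inf * (1 - exp(-k * (t - t0)))
    return l_inf * (1.0 - math.exp(-k * (age - t0)))

def beverton_holt_z(lengths, l_inf, k, l_prime):
    # Beverton and Holt (1956) estimator of total mortality Z from the mean
    # length Lbar of fully selected fish (length >= l_prime):
    #   Z = k * (L_inf - Lbar) / (Lbar - l_prime)
    sample = [l for l in lengths if l >= l_prime]
    lbar = sum(sample) / len(sample)
    return k * (l_inf - lbar) / (lbar - l_prime)
```

Note that this sketch treats l_inf as a single constant shared by all individuals; the abstract's point is precisely that this assumption can bias the resulting estimates.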

Relevance: 20.00%

Abstract:

In the analysis of tagging data, it has been found that the least-squares method, based on the increment function known as the Fabens method, produces biased estimates because individual variability in growth is not allowed for. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to show the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
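
The least-squares criterion of the unmodified Fabens method, which the paper modifies, can be sketched as follows (our notation):

```python
import math

def fabens_increment(l_release, l_inf, k, dt):
    # Fabens form of the von Bertalanffy model for tagging data: expected
    # growth increment over time at liberty dt, given length at release
    return (l_inf - l_release) * (1.0 - math.exp(-k * dt))

def fabens_ssq(data, l_inf, k):
    # Sum of squared residuals minimised by the classical Fabens method;
    # `data` holds (release length, time at liberty, observed increment)
    # triples.  Treating l_inf as common to every individual is exactly the
    # assumption the modified method relaxes.
    return sum((inc - fabens_increment(l1, l_inf, k, dt)) ** 2
               for l1, dt, inc in data)
```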

Relevance: 20.00%

Abstract:

The von Bertalanffy growth model is extended to incorporate explanatory variables. The generalized model includes the switched growth model and the seasonal growth model as special cases, and can also be used to assess the effect of tagging on growth. Distribution-free and consistent estimating functions are constructed for estimation of growth parameters from tag-recapture data in which age at release is unknown. This generalizes the work of James (1991, Biometrics 47: 1519-1530), who considered the classical model and allowed for individual variability in growth. A real dataset from barramundi (Lates calcarifer) is analysed to estimate the growth parameters and the possible effect of tagging on growth.
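
One way to picture such an extension (our illustrative parameterisation, not the paper's exact form) is to let the growth coefficient depend on an explanatory variable, here a 0/1 tagging indicator:

```python
import math

def growth_increment(l1, dt, l_inf, k0, beta, tagged):
    # von Bertalanffy increment with the growth coefficient made a function
    # of an explanatory variable: k = k0 + beta * tagged.  A negative beta
    # would represent a growth-retarding tagging effect; beta = 0 recovers
    # the classical model.
    k = k0 + beta * tagged
    return (l_inf - l1) * (1.0 - math.exp(-k * dt))
```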

Relevance: 20.00%

Abstract:

This paper investigates the effect that text pre-processing approaches have on the estimation of the readability of web pages. Readability has been highlighted as an important aspect of web search result personalisation in previous work. The most widely used text readability measures rely on surface-level characteristics of text, such as the length of words and sentences. We demonstrate that different tools for extracting text from web pages lead to very different estimations of readability. This has an important implication for search engines because search result personalisation strategies that consider users' reading ability may fail if incorrect text readability estimations are computed.
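
Surface-level readability measures of the kind discussed can be illustrated with the Flesch Reading Ease formula; the crude vowel-run syllable counter below is our own heuristic, itself an example of a pre-processing choice that shifts the estimate:

```python
import re

def crude_syllables(word):
    # Very rough syllable proxy: count runs of vowel letters
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch Reading Ease, computed from surface characteristics only:
    #   206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(crude_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```

Swapping in a different text extractor or syllable heuristic changes the score, which is the sensitivity the paper examines.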

Relevance: 20.00%

Abstract:

This paper presents an approach for dynamic state estimation of aggregated generators by introducing a new correction factor for equivalent inter-area power flows. The spread of generators around the center of inertia of each area is summarized by a correction term α on the equivalent power flow between the areas, which is applied to the identification and estimation process. A nonlinear time-varying Kalman filter is applied to estimate the equivalent angles and velocities of coherent areas, reducing the effect of local modes on the estimated states. The approach is simulated on two test systems, and the results show the effect of the correction factor and the performance of the state estimation in capturing the inter-area dynamics of the system.
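
The paper's filter is nonlinear and time-varying; as a minimal stand-in for readers unfamiliar with the machinery, a single scalar, linear Kalman predict/update cycle looks like this (all symbols and defaults are ours):

```python
def kalman_step(x, p, z, a=1.0, h=1.0, q=1e-3, r=1e-1):
    # One predict/update cycle of a scalar Kalman filter.
    # x, p: prior state estimate and its variance; z: new measurement;
    # a: state transition; h: observation model; q, r: process and
    # measurement noise variances.
    x_pred = a * x                             # predicted state
    p_pred = a * p * a + q                     # predicted variance
    k = p_pred * h / (h * p_pred * h + r)      # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)      # update with the innovation
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new
```

Repeatedly feeding noisy measurements of a constant drives the estimate toward that constant while the variance shrinks; the paper's filter does the analogous thing for equivalent inter-area angles and velocities.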

Relevance: 20.00%

Abstract:

Common diseases such as endometriosis (ED), Alzheimer's disease (AD) and multiple sclerosis (MS) account for a significant proportion of the health care burden in many countries. Genome-wide association studies (GWASs) for these diseases have identified a number of individual genetic variants contributing to the risk of those diseases. However, the effect size for most variants is small, and collectively the known variants explain only a small proportion of the estimated heritability. We used a linear mixed model to fit all single nucleotide polymorphisms (SNPs) simultaneously, and estimated genetic variances on the liability scale using SNPs from GWASs in unrelated individuals for these three diseases. For each of the three diseases, case and control samples were not all genotyped in the same laboratory. We demonstrate that a careful analysis can obtain robust estimates, but also that insufficient quality control (QC) of SNPs can lead to spurious results and that too-stringent QC is likely to remove real genetic signals. Our estimates show that common SNPs on commercially available genotyping chips capture significant variation contributing to liability for all three diseases. The estimated proportion of total variation tagged by all SNPs was 0.26 (SE 0.04) for ED, 0.24 (SE 0.03) for AD and 0.30 (SE 0.03) for MS. Further, we partitioned the genetic variance explained into five minor allele frequency (MAF) categories, as well as by chromosome and by gene annotation. We provide strong evidence that a substantial proportion of variation in liability is explained by common SNPs, and thereby give insights into the genetic architecture of the diseases.
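
Estimating variance on the liability scale from an ascertained case-control sample requires the standard observed-to-liability transformation (Lee et al. 2011); a stdlib-only sketch, with a bisection probit as a convenience, is:

```python
import math

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def probit(p, lo=-10.0, hi=10.0):
    # Inverse standard-normal CDF by bisection (stdlib-only convenience)
    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def observed_to_liability(h2_obs, prevalence, case_fraction):
    # Transform variance explained on the observed 0/1 scale to the
    # liability scale for an ascertained case-control sample:
    #   h2_l = h2_o * K(1-K)/z^2 * K(1-K)/(P(1-P))
    # where K is population prevalence, P the sample case fraction, and z
    # the standard-normal density at the liability threshold.
    t = probit(1.0 - prevalence)   # liability threshold
    z = normal_pdf(t)              # density at the threshold
    k, p = prevalence, case_fraction
    return h2_obs * k * (1 - k) / (z * z) * k * (1 - k) / (p * (1 - p))
```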

Relevance: 20.00%

Abstract:

Anticipating the number and identity of bidders has significant influence on many theoretical results of the auction itself and on bidders' bidding behaviour. When a bidder knows in advance which specific bidders are likely competitors, this knowledge gives the company a head start when setting the bid price. However, despite these competitive implications, most previous studies have focused almost entirely on forecasting the number of bidders, and only a few authors have dealt with the identity dimension, and then only qualitatively. Using a case study with immediate real-life applications, this paper develops a method for estimating every potential bidder's probability of participating in a future auction as a function of the tender's economic size, removing the bias caused by the distribution of contract-size opportunities. This way, a bidder or auctioneer will be able to estimate the likelihood that a specific group of key, previously identified bidders will participate in a future tender.
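
A purely illustrative estimator in this spirit (binning and names are ours, not the paper's method) conditions participation frequency on contract-size bins, so that an uneven distribution of contract-size opportunities does not bias the estimate:

```python
from collections import defaultdict

def participation_probability(history, bidder, size_bins):
    # Estimate P(bidder participates | tender size falls in a given bin).
    # Conditioning on the bin, rather than pooling all tenders, removes the
    # bias from the uneven distribution of contract-size opportunities.
    # `history` holds (tender size, set of participating bidders) pairs.
    tenders = defaultdict(int)
    entries = defaultdict(int)
    for size, bidders in history:
        b = next(i for i, (lo, hi) in enumerate(size_bins) if lo <= size < hi)
        tenders[b] += 1
        if bidder in bidders:
            entries[b] += 1
    return {size_bins[b]: entries[b] / tenders[b] for b in tenders}
```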

Relevance: 20.00%

Abstract:

Age estimation from facial images is increasingly receiving attention as a way to address age-based access control, age-adaptive targeted marketing and other applications. Since even humans can be misled by the complex biological processes involved, finding a robust method remains a research challenge today. In this paper, we propose a new framework for the integration of Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) to obtain a highly discriminative feature representation able to model shape, appearance, wrinkles and skin spots. In addition, this paper proposes a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) to classify a subject into an age group, followed by Support Vector Regression (SVR) to estimate a specific age. Errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, achieving mean absolute errors (MAE) of 4.50 and 5.86 years, respectively. Robustness was also evaluated on a merge of both datasets, yielding a MAE of 5.20 years. Furthermore, we compared age estimates made by humans with those of the proposed approach, showing that the machine outperforms humans. The proposed approach is competitive with the current state-of-the-art and provides additional robustness to blur, lighting and expression variance brought about by the local phase features.
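
The two-stage hierarchy can be sketched structurally as follows; the toy stand-ins below replace a trained multi-class SVM and per-group SVRs, and the overlapping group ranges are what lets a near-boundary misclassification still produce a reasonable age:

```python
def make_hierarchical_estimator(classify_group, regressors):
    # Two-stage estimator: a coarse group classifier followed by a
    # per-group regressor trained on that group's age range widened by an
    # overlap on each side (stand-ins here for the SVM and SVRs).
    def estimate(features):
        group = classify_group(features)     # coarse age group
        return regressors[group](features)   # specific age within the range
    return estimate

# Toy stand-ins: a single scalar feature that is roughly the age itself.
# Group 0 covers ages 0-30 but its regressor is "trained" up to 35, and
# group 1 covers 30+ but down to 25, mimicking the overlap.
est = make_hierarchical_estimator(
    classify_group=lambda x: 0 if x < 30 else 1,
    regressors={0: lambda x: min(x, 35.0),
                1: lambda x: max(x, 25.0)},
)
```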

Relevance: 20.00%

Abstract:

Background: Smoking and physical inactivity are major risk factors for heart disease. Linking strategies that promote improvements in fitness and assist quitting smoking has the potential to address both risk factors simultaneously. The objective of this study is to compare the effects of two exercise interventions (high-intensity interval training (HIIT) and lifestyle physical activity) on smoking cessation in female smokers. Method/design: This study will use a randomised controlled trial design. Participants: women aged 18–55 years who smoke ≥ 5 cigarettes/day and want to quit smoking. Intervention: all participants will receive usual care for quitting smoking. Group 1 participants will complete two gym-based supervised HIIT sessions/week and one home-based HIIT session/week. At each training session, participants will be asked to complete four 4-min (4 × 4 min) intervals at approximately 90 % of maximum heart rate, interspersed with 3-min recovery periods. Group 2 participants will receive a resource pack and pedometer, and will be asked to use the 10,000 steps log book to record steps and other physical activities, with the aim of increasing daily steps to 10,000 steps/day. Analysis will be intention-to-treat, and measures will include smoking cessation, withdrawal and cravings, fitness, physical activity, and well-being. Discussion: The study builds on previous research suggesting that exercise intensity may influence the efficacy of exercise as a smoking cessation intervention. The hypothesis is that HIIT will improve fitness and assist women to quit smoking.

Relevance: 20.00%

Abstract:

Terrain traversability estimation is a fundamental requirement to ensure the safety of autonomous planetary rovers and their ability to conduct long-term missions. This paper addresses two fundamental challenges for terrain traversability estimation techniques. First, representations of terrain data, which are typically built by the rover’s onboard exteroceptive sensors, are often incomplete due to occlusions and sensor limitations. Second, during terrain traversal, the rover-terrain interaction can cause terrain deformation, which may significantly alter the difficulty of traversal. We propose a novel approach built on Gaussian process (GP) regression to learn, and consequently to predict, the rover’s attitude and chassis configuration on unstructured terrain using terrain geometry information only. First, given incomplete terrain data, we make an initial prediction under the assumption that the terrain is rigid, using a learnt kernel function. Then, we refine this initial estimate to account for the effects of potential terrain deformation, using a near-to-far learning approach based on multitask GP regression. We present an extensive experimental validation of the proposed approach on terrain that is mostly rocky and whose geometry changes as a result of loads from rover traversals. This demonstrates the ability of the proposed approach to accurately predict the rover’s attitude and configuration in partially occluded and deformable terrain.
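
The core computation behind such attitude/configuration prediction is the GP posterior mean; a minimal stdlib-only sketch with a squared-exponential kernel (the common default; kernel, solver and names are our simplification, not the paper's multitask model) is:

```python
import math

def sq_exp(x1, x2, length=1.0, var=1.0):
    # Squared-exponential kernel, a common default for GP regression
    return var * math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(a, b):
    # Naive Gauss-Jordan elimination with partial pivoting (fine for the
    # handful of training points used here)
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [v - f * w for v, w in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def gp_predict(xs, ys, x_star, noise=1e-6):
    # GP posterior mean at x_star: k_*^T (K + noise*I)^{-1} y
    k = [[sq_exp(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(k, ys)
    return sum(sq_exp(x_star, xi) * ai for xi, ai in zip(xs, alpha))
```

The learnt kernel and the near-to-far multitask refinement in the paper extend this basic mean prediction to incomplete and deformable terrain data.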

Relevance: 20.00%

Abstract:

Data-driven approaches such as Gaussian Process (GP) regression have been used extensively in recent robotics literature to achieve estimation by learning from experience. To ensure satisfactory performance, multiple learning inputs are usually required. Intuitively, adding new inputs can often contribute to better estimation accuracy; however, it may come at the cost of a new sensor, a larger training dataset and/or more complex learning, sometimes for limited benefit. Therefore, it is crucial to have a systematic procedure to determine the actual impact each input has on the estimation performance. To address this issue, in this paper we propose to analyse the impact of each input on the estimate using a variance-based sensitivity analysis method. We propose an approach built on Analysis of Variance (ANOVA) decomposition, which can characterise how the prediction changes as one or more of the inputs change, and can also quantify the prediction uncertainty attributed to each of the inputs in the framework of dependent inputs. We apply the proposed approach to a terrain-traversability estimation method we proposed in prior work, which is based on multi-task GP regression, and we validate this implementation experimentally using a rover on a Mars-analogue terrain.
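
For independent inputs, the ANOVA-decomposition share of variance attributable to a single input is the first-order Sobol index S_i = Var(E[Y | X_i]) / Var(Y); a Monte Carlo "pick-and-freeze" sketch (our simplification, which drops the paper's dependent-input machinery) is:

```python
import random

def first_order_sobol(f, n_inputs, i, n=20000, seed=0):
    # Pick-and-freeze Monte Carlo estimate of the first-order Sobol index
    # S_i = Var(E[Y | X_i]) / Var(Y).  Inputs are taken independent and
    # uniform on [0, 1) in this sketch.
    rng = random.Random(seed)
    ya, yab = [], []
    for _ in range(n):
        a = [rng.random() for _ in range(n_inputs)]
        b = [rng.random() for _ in range(n_inputs)]
        b[i] = a[i]                      # freeze input i, resample the rest
        ya.append(f(a))
        yab.append(f(b))
    ma = sum(ya) / n
    mb = sum(yab) / n
    var = sum((y - ma) ** 2 for y in ya) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(ya, yab)) / n
    return cov / var
```

For f(x) = x0 + x1 with symmetric inputs, each input should account for about half the output variance.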

Relevance: 20.00%

Abstract:

We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time scale reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
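
A minimal PMC sketch conveys the idea (a single Gaussian proposal adapted to weighted samples; the cosmological likelihoods are of course far richer): within each iteration the population draws are independent, which is what makes the workload easy to parallelise.

```python
import math
import random

def pmc_mean(log_target, iterations=20, n=2000, seed=0):
    # Minimal population Monte Carlo: at each iteration, draw a population
    # from the current Gaussian proposal, compute importance weights
    # against the (unnormalised) target, and adapt the proposal's mean and
    # spread to the weighted sample.
    rng = random.Random(seed)
    mu, sigma = 0.0, 2.0
    for _ in range(iterations):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        logq = [-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) for x in xs]
        logw = [log_target(x) - q for x, q in zip(xs, logq)]
        m = max(logw)                       # stabilise before exponentiating
        ws = [math.exp(l - m) for l in logw]
        tot = sum(ws)
        mu = sum(w * x for w, x in zip(ws, xs)) / tot
        var = sum(w * (x - mu) ** 2 for w, x in zip(ws, xs)) / tot
        sigma = max(math.sqrt(var), 1e-3)
    return mu
```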

Relevance: 20.00%

Abstract:

In this paper, we examine approaches to estimating a Bayesian mixture model at both single and multiple time points for a sample of actual and simulated aerosol particle size distribution (PSD) data. For estimation of a mixture model at a single time point, we use Reversible Jump Markov Chain Monte Carlo (RJMCMC) to estimate the mixture model parameters, including the number of components, which is assumed to be unknown. We compare the results of this approach to a commonly used estimation method in the aerosol physics literature. As PSD data are often measured over time, frequently at small time intervals, we also examine the use of an informative prior for estimation of the mixture parameters that takes into account the correlated nature of the parameters. The Bayesian mixture model offers a promising approach, providing advantages in both estimation and inference.
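
RJMCMC itself is involved, but the quantity any such sampler must evaluate repeatedly is the mixture likelihood; a minimal sketch for a Gaussian mixture is:

```python
import math

def mixture_logpdf(x, weights, means, sds):
    # Log-density of a finite Gaussian mixture at a single observation x;
    # weights should sum to one.
    total = 0.0
    for w, m, s in zip(weights, means, sds):
        total += w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return math.log(total)

def mixture_loglik(data, weights, means, sds):
    # Log-likelihood of a dataset under the mixture; an RJMCMC move that
    # adds or deletes a component changes the lengths of these parameter
    # lists, which is what makes the number of components estimable.
    return sum(mixture_logpdf(x, weights, means, sds) for x in data)
```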

Relevance: 20.00%

Abstract:

The concession agreement is the core feature of BOT projects, with the concession period being the most essential element in determining the time span of the various rights, obligations and responsibilities of the government and concessionaire. Concession period design is therefore crucial for financial viability and for determining the benefit/cost allocation between the host government and the concessionaire. However, while the concession period and project life span are essentially interdependent, most methods to date treat their determination as contiguous events that are decided exogenously. Moreover, these methods seldom consider the often uncertain social benefits and costs involved, which are critical in defining, pricing and distributing benefits and costs between the various parties and in evaluating potentially distributable cash flows. In this paper, we present the results of the first stage of a research project aimed at determining the optimal build-operate-transfer (BOT) project life span and concession period endogenously and interdependently by maximizing the combined benefits of stakeholders. Based on the estimation of the economic and social development involved, a negotiation space for the concession period interval is obtained, with its lower boundary creating the desired financial return for the private investors and its upper boundary ensuring economic feasibility for the host government as well as maximized welfare within the project life. The outcome of the new quantitative model is considered a suitable basis for future field trials prior to implementation. The structure and details of the model are provided in the paper, with a Hong Kong tunnel project as a case study to demonstrate its detailed application.
The basic contributions of the paper to the theory of construction procurement are that the project life span and concession period are determined jointly and that social benefits are taken into account in the examination of project financial benefits. In practical terms, the model goes beyond the current practice of linear-process thinking and should enable engineering consultants to provide project information more rationally and accurately to BOT project bidders, increasing the government's prospects of successfully entering into a contract with a concessionaire. This is expected to generate more negotiation space for the government and concessionaire in determining the major socioeconomic features of individual BOT contracts when negotiating the concession period. As a result, the use of the model should increase the total benefit to both parties.
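
A heavily simplified, illustrative search for such a negotiation interval (all parameter names and the cash-flow shapes are ours, not the paper's model) takes the lower bound as the shortest concession giving the concessionaire a non-negative NPV beyond a required return, and the upper bound as the longest concession for which the government's remaining share of the project life still yields a non-negative social NPV:

```python
def npv(cashflows, rate):
    # Net present value of yearly cash flows (index 0 = year 0)
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def concession_interval(annual_net_revenue, capex, social_benefit,
                        life_span, rate, required_return=0.0):
    # Lower bound: shortest concession t whose concessionaire NPV
    # (capex up front, then t years of net revenue) clears the hurdle.
    lower = next((t for t in range(1, life_span + 1)
                  if npv([-capex] + [annual_net_revenue] * t, rate)
                  >= required_return), None)
    # Upper bound: longest concession t for which the social benefit in
    # the government's remaining (life_span - t) years has NPV >= 0.
    upper = max((t for t in range(1, life_span + 1)
                 if npv([0.0] * (t + 1)
                        + [social_benefit] * (life_span - t), rate) >= 0.0),
                default=None)
    return lower, upper
```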