7 results for Negative Binomial Regression Model (NBRM)

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

An organism living in water, and present at low density, may be distributed at random; samples taken from the water are therefore likely to be distributed according to the Poisson distribution. The distribution of many organisms, however, is not random, individuals being either aggregated into clusters or more uniformly distributed. By fitting a Poisson distribution to the data, it is only possible to test the hypothesis that an observed set of frequencies does not deviate significantly from an expected random pattern. Significant deviations from randomness, whether towards uniformity or aggregation, may be recognized either by rejection of the random hypothesis or by examining the variance/mean (V/M) ratio of the data. Hence, a V/M ratio not significantly different from unity indicates a random distribution, a ratio greater than unity a clustered distribution, and a ratio less than unity a regular or uniform distribution. If individual cells are clustered, however, the negative binomial distribution should provide a better description of the data. In addition, a parameter of this distribution, the binomial exponent (k), may be used as a measure of the ‘intensity’ of aggregation present. Hence, this Statnote describes how to fit the negative binomial distribution to counts of a microorganism in samples taken from a freshwater environment.
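
A minimal sketch of the workflow this Statnote describes, using invented counts and a method-of-moments estimate of k (the Statnote itself may use a different estimator, such as maximum likelihood):

```python
# Illustrative sketch only: hypothetical counts of a microorganism per water
# sample; compute the variance/mean ratio and fit a negative binomial.
import numpy as np
from scipy import stats

counts = np.array([0, 2, 1, 5, 0, 0, 3, 8, 1, 0, 4, 2, 0, 6, 1, 0])  # hypothetical data

mean = counts.mean()
var = counts.var(ddof=1)
vm_ratio = var / mean  # ~1 random, > 1 clustered, < 1 uniform

# Method-of-moments estimate of the exponent k:
# for a negative binomial, var = mean + mean**2 / k.
k = mean**2 / (var - mean) if var > mean else np.inf

# Expected frequencies under the fitted negative binomial
# (scipy parameterises nbinom by n = k and p = k / (k + mean)).
p = k / (k + mean)
x = np.arange(0, counts.max() + 1)
expected = stats.nbinom.pmf(x, k, p) * counts.size

print(f"V/M ratio = {vm_ratio:.2f}, k = {k:.2f}")
print("expected frequencies:", np.round(expected, 2))
```

A small k indicates strong aggregation; as k grows large the negative binomial approaches the Poisson.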

Relevance:

100.00%

Publisher:

Abstract:

A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward to overcome the drawback of high dependence on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively from the samples used for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modelling robustness than methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
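
A rough sketch of the regression stage under stated assumptions: scikit-learn has no built-in RVM, so ARDRegression (a closely related sparse Bayesian model that also returns predictive means and standard deviations) stands in here, and all spectral data are synthetic.

```python
# Illustrative stand-in for the paper's RVM: ARDRegression on intensities of
# hypothetical, already-selected analytical lines (X) against certified
# concentrations (y). Not the authors' code or data.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(23, 5))                        # 23 samples, 5 selected lines
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.05, 23)    # synthetic Cr concentrations

model = ARDRegression().fit(X, y)

# Probabilistic prediction: mean and standard deviation, from which a
# confidence interval for each predicted concentration can be formed.
y_mean, y_std = model.predict(X, return_std=True)
lower, upper = y_mean - 1.96 * y_std, y_mean + 1.96 * y_std
print(np.c_[y_mean[:3], lower[:3], upper[:3]])
```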

Relevance:

100.00%

Publisher:

Abstract:

Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or standardizing the spectra over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information from spectral data within the normal distribution is retained in the regression model while information from outliers is suppressed or removed. Copper concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modelling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better overall performance in model robustness and convergence speed than four known weighting functions.
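
A minimal sketch of the general reweighting idea, not the paper's own RLS-SVM: scikit-learn's SVR with sample weights stands in for the weighted LS-SVM, and the segmented thresholds and data below are illustrative assumptions.

```python
# Illustrative sketch: fit, compute residuals, apply a segmented weighting
# function based on a robust scale estimate, then refit with those weights.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(16, 4))              # 16 brass samples, 4 spectral features (hypothetical)
y = 3.0 * X[:, 0] + rng.normal(0, 0.05, 16)
y[3] += 1.0                                      # one outlying measurement

# Step 1: unweighted fit and residuals.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
resid = y - svr.predict(X)

# Step 2: segmented weights from the MAD scale: residuals near the centre keep
# full weight, moderate ones are down-weighted, extreme ones are nearly removed.
scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
z = np.abs(resid) / scale
weights = np.where(z <= 2.0, 1.0, np.where(z <= 3.0, 3.0 - z, 1e-4))

# Step 3: robust refit with the segmented weights.
svr_robust = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y, sample_weight=weights)
```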

Relevance:

100.00%

Publisher:

Abstract:

Solving many scientific problems requires effective regression and/or classification models for large high-dimensional datasets. Experts from these problem domains (e.g. biologists, chemists, financial analysts) have insights into the domain which can be helpful in developing powerful models, but they need a modelling framework that helps them to use these insights. Data visualisation is an effective technique for presenting data and obtaining feedback from the experts. A single global regression model can rarely capture the full behavioural variability of a huge multi-dimensional dataset. Instead, local regression models, each focused on a separate area of input space, often work better, since the behaviour of different areas may vary. Classical local models such as Mixture of Experts segment the input space automatically, which is not always effective, and they lack involvement of the domain experts to guide a meaningful segmentation of the input space. In this paper we address this issue by allowing domain experts to interactively segment the input space using data visualisation. The resulting segmentation is then used to develop effective local regression models.
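
A minimal sketch of the local-modelling step only, assuming the expert has already labelled each point with a segment id via interactive visualisation; all names and data here are hypothetical stand-ins for the paper's framework.

```python
# Illustrative sketch: one regression model per expert-defined segment,
# with predictions routed to the model of each point's segment.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = np.where(X[:, 0] > 0, X @ np.ones(10), -X[:, 1] ** 2) + rng.normal(0, 0.1, 300)
segment = (X[:, 0] > 0).astype(int)   # stand-in for an expert-drawn segmentation

# Fit one local regression model per segment.
local_models = {s: Ridge().fit(X[segment == s], y[segment == s])
                for s in np.unique(segment)}

def predict(X_new, segment_new):
    """Route each point to the local model of its segment."""
    out = np.empty(len(X_new))
    for s, model in local_models.items():
        mask = segment_new == s
        if mask.any():
            out[mask] = model.predict(X_new[mask])
    return out
```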

Relevance:

100.00%

Publisher:

Abstract:

The airline industry is at the forefront of many technological developments and is often a pioneer in adopting such innovations on a large scale. It needs to improve its efficiency, as current trends in input prices and competitive pressures show that any airline will face increasingly challenging market conditions. This paper focuses on the relationship between ICT investments and efficiency in the airline industry and employs a two-stage analytical investigation combining DEA, SFA and a Tobit regression model. We first estimate the productivity of the airline industry using a balanced panel of 17 airlines over the period 1999–2004 with the Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA) methods. We then evaluate the impact of the determinants of productivity in the industry, concentrating on ICT. The results suggest that, despite all the negative shocks to the airline industry during the sample period, ICT had a positive effect on productivity during 1999–2004.
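
A rough sketch of the first stage only, under stated assumptions: an input-oriented CCR DEA efficiency score computed with linear programming on made-up airline data; the paper's second stage, regressing these scores on ICT investment with a Tobit model, is not shown.

```python
# Illustrative sketch: input-oriented CCR DEA efficiency scores via linprog.
# Inputs/outputs are hypothetical, not the paper's panel data.
import numpy as np
from scipy.optimize import linprog

X = np.array([[3.0, 5.0], [4.0, 4.0], [6.0, 8.0]])   # inputs per airline (e.g. fuel, labour)
Y = np.array([[2.0], [3.0], [3.5]])                   # outputs (e.g. revenue passenger-km)
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

scores = []
for o in range(n):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o], X.T]
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros(s), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (1 + n))
    scores.append(res.x[0])

print(np.round(scores, 3))   # efficiency = 1.0 for airlines on the frontier
```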

Relevance:

100.00%

Publisher:

Abstract:

In the wake of the global financial crisis, several macroeconomic contributions have highlighted the risks of excessive credit expansion. In particular, too much finance can have a negative impact on growth. We examine the microeconomic foundations of this argument, positing a non-monotonic relationship between leverage and firm-level productivity growth in the spirit of the trade-off theory of capital structure. A threshold regression model estimated on a sample of Central and Eastern European countries confirms that TFP growth increases with leverage until the latter reaches a critical threshold beyond which leverage lowers TFP growth. This estimate can provide guidance to firms and policy makers on identifying "excessive" leverage. We find similar non-monotonic relationships between leverage and proxies for firm value. Our results are a first step in bridging the gap between the literature on optimal capital structure and the wider macro literature on the finance-growth nexus. © 2012 Elsevier Ltd.
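
A minimal sketch of a single-threshold regression in the spirit of the model described above, on simulated data: a grid search over candidate thresholds, a separate slope fitted on each side, and the threshold chosen to minimise the sum of squared residuals. The variable names and data-generating process are assumptions for illustration, not the paper's specification.

```python
# Illustrative sketch: estimate a leverage threshold for TFP growth by grid search.
import numpy as np

rng = np.random.default_rng(3)
leverage = rng.uniform(0, 1, 500)
true_tau = 0.55
tfp_growth = np.where(leverage <= true_tau,
                      0.8 * leverage,
                      0.8 * true_tau - 0.6 * (leverage - true_tau))
tfp_growth += rng.normal(0, 0.05, 500)

def ssr_at(tau):
    """Sum of squared residuals from separate OLS fits below and above tau."""
    ssr = 0.0
    for mask in (leverage <= tau, leverage > tau):
        Xd = np.c_[np.ones(mask.sum()), leverage[mask]]
        beta, *_ = np.linalg.lstsq(Xd, tfp_growth[mask], rcond=None)
        ssr += np.sum((tfp_growth[mask] - Xd @ beta) ** 2)
    return ssr

# Restrict candidates to interior quantiles so both regimes stay populated.
grid = np.quantile(leverage, np.linspace(0.15, 0.85, 71))
tau_hat = grid[np.argmin([ssr_at(t) for t in grid])]
print(f"estimated leverage threshold: {tau_hat:.3f}")
```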

Relevance:

100.00%

Publisher:

Abstract:

Solving many scientific problems requires effective regression and/or classification models for large high-dimensional datasets. Experts from these problem domains (e.g. biologists, chemists, financial analysts) have insights into the domain which can be helpful in developing powerful models, but they need a modelling framework that helps them to use these insights. Data visualisation is an effective technique for presenting data and obtaining feedback from the experts. A single global regression model can rarely capture the full behavioural variability of a huge multi-dimensional dataset. Instead, local regression models, each focused on a separate area of input space, often work better, since the behaviour of different areas may vary. Classical local models such as Mixture of Experts segment the input space automatically, which is not always effective, and they lack involvement of the domain experts to guide a meaningful segmentation of the input space. In this paper we address this issue by allowing domain experts to interactively segment the input space using data visualisation. The resulting segmentation is then used to develop effective local regression models.