984 results for "regression algorithm"


Relevance:

30.00%

Publisher:

Abstract:

A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (fuzzy ART, or FA) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted GRNNFA, retains these advantages while reducing the computational requirements of calculating and storing the kernel information. A clustering version of the GRNN is designed, with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised; convergence of the gradient descent algorithm can be accelerated by geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets was conducted to assess and compare the effectiveness of GRNNFA with other approaches. The GRNNFA model is also employed in a novel application task: predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA to noisy data regression problems.
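To make the two numerical ingredients concrete, the sketch below implements a plain GRNN predictor (a Nadaraya-Watson kernel average) and a kernel-width search by gradient descent on the leave-one-out error, with the step size growing geometrically as the abstract describes. The FA clustering and noise-removal stage is omitted, and all constants (learning rate, growth factor) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction: a Gaussian-kernel (Nadaraya-Watson) weighted
    average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def fit_sigma(X, y, sigma=1.0, lr=1e-3, growth=1.05, iters=100):
    """Kernel-width search: gradient descent on the leave-one-out MSE,
    with the step size growing geometrically each iteration, echoing the
    'geometric incremental growth of the updating factor'."""
    def loo_mse(s):
        errs = [y[i] - grnn_predict(np.delete(X, i, axis=0),
                                    np.delete(y, i),
                                    X[i:i + 1], s)[0]
                for i in range(len(X))]
        return np.mean(np.square(errs))
    eps = 1e-4
    for _ in range(iters):
        grad = (loo_mse(sigma + eps) - loo_mse(sigma - eps)) / (2 * eps)
        sigma = max(sigma - lr * grad, 1e-3)   # keep the width positive
        lr *= growth                           # geometric step-size growth
    return sigma
```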

Relevance:

30.00%

Publisher:

Abstract:

Robust regression in statistics leads to challenging optimization problems. Here, we study one such problem, in which the objective is non-smooth, non-convex, and expensive to evaluate. We study the numerical performance of several derivative-free optimization algorithms with the aim of computing robust multivariate estimators. Our experiments demonstrate that the existing algorithms often fail to deliver optimal solutions. We introduce three new methods that use Powell's derivative-free algorithm. The proposed methods are reliable and can be used when processing very large data sets containing outliers.
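As an illustration of the problem class, the sketch below minimizes a least trimmed squares objective, a robust estimator whose objective is non-smooth and non-convex, with SciPy's implementation of Powell's derivative-free method. The abstract's three methods build on Powell's algorithm in ways not detailed here; this is only the baseline setup, with synthetic data and an assumed trimming constant h.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)
y[:20] += 10.0                                # plant gross outliers

def lts_objective(beta, h=150):
    """Least trimmed squares: sum of the h smallest squared residuals.
    Non-smooth and non-convex in beta, so derivative-free search applies."""
    r2 = np.sort((y - X @ beta) ** 2)
    return r2[:h].sum()

res = minimize(lts_objective, x0=np.zeros(p), method="Powell")
# May land in a local optimum -- exactly the difficulty the abstract studies.
print(res.x)
```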

Relevance:

30.00%

Publisher:

Abstract:

The Karnik-Mendel (KM) algorithm is the most widely used and researched type-reduction (TR) algorithm in the literature. The algorithm is iterative in nature and, despite consistent long-term effort, no general closed-form formula has been found to replace this computationally expensive procedure. In this research work, we demonstrate that the outcome of the KM algorithm can be approximated by simple linear regression techniques. Since most applications have a fixed range of inputs with small-scale variations, it is possible to handle those complexities in the design phase and build a fuzzy logic system (FLS) with a low run-time computational burden. This objective can be well served by the application of regression techniques. This work presents an overview of the feasibility of regression techniques for the design of data-driven type reducers while keeping the uncertainty bound in the FLS intact. Simulation results demonstrate that the approximation error is less than 2%. Our work thus preserves the essence of the Karnik-Mendel algorithm while serving the requirement of low computational complexity.
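A minimal sketch of the idea, under assumed design choices (fixed rule centroids, an illustrative input range, scikit-learn for the regression): compute the KM left end-point by the usual switch-point iteration, then fit an ordinary linear regression from the firing intervals to that output over the design-phase input range.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def km_left(z, wl, wu, iters=50):
    """Karnik-Mendel iteration for the left end-point y_l of the
    type-reduced set (the right end-point is symmetric, swapping wl/wu).
    z: sorted rule centroids; wl, wu: lower/upper firing strengths."""
    y = ((wl + wu) @ z) / (wl + wu).sum()
    for _ in range(iters):
        k = np.searchsorted(z, y)                    # current switch point
        w = np.where(np.arange(len(z)) < k, wu, wl)  # upper weights below y
        y_new = (w @ z) / w.sum()
        if np.isclose(y_new, y):
            return y_new
        y = y_new
    return y

# Design phase: train a linear model to mimic KM on a fixed input range,
# as the abstract proposes; 8 rules and the ranges below are illustrative.
rng = np.random.default_rng(0)
n_rules, n_samples = 8, 2000
z = np.linspace(0.0, 1.0, n_rules)                   # fixed rule centroids
WL = rng.uniform(0.0, 0.5, size=(n_samples, n_rules))
WU = WL + rng.uniform(0.0, 0.5, size=(n_samples, n_rules))
y = np.array([km_left(z, wl, wu) for wl, wu in zip(WL, WU)])

reg = LinearRegression().fit(np.hstack([WL, WU]), y)
approx = reg.predict(np.hstack([WL, WU]))
print(np.abs(approx - y).max() / (y.max() - y.min()))  # worst relative error
```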

Relevance:

30.00%

Publisher:

Abstract:

The Karnik-Mendel (KM) algorithm is the most widely used type-reduction (TR) method in the literature for the design of interval type-2 fuzzy logic systems (IT2FLS). Its iterative nature, required for finding the left and right switch points, is its Achilles heel. Despite a decade of research, none of the alternative TR methods offer uncertainty measures equivalent to the KM algorithm. This paper takes a data-driven approach to tackling the computational burden of this algorithm while keeping its key features. We propose a regression method to approximate the left and right switch points found by the KM algorithm. The approximator uses only the firing intervals, rules' centroids, and FLS structural features as inputs. Once training is done, it can precisely approximate the left and right switch points through basic vector multiplications. Comprehensive simulation results demonstrate that the approximation accuracy for a wide variety of FLSs is 100%. Flexibility, ease of implementation, and speed are other features of the proposed method.
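The computational payoff claimed in the abstract is that, once a regressor predicts the left and right switch points, type reduction collapses to two weighted averages. The sketch below shows that final step only; the regression model itself (trained on firing intervals, rule centroids, and FLS structural features) is omitted, and the function name is hypothetical.

```python
import numpy as np

def endpoints_from_switch_points(z, wl, wu, kl, kr):
    """Given predicted left/right switch points kl and kr, the type-reduced
    end-points reduce to two weighted averages -- the 'basic vector
    multiplications' the abstract refers to. z: sorted rule centroids;
    wl, wu: lower/upper firing strengths."""
    idx = np.arange(len(z))
    w_left = np.where(idx < kl, wu, wl)    # upper weights below the left switch
    w_right = np.where(idx < kr, wl, wu)   # lower weights below the right switch
    return (w_left @ z) / w_left.sum(), (w_right @ z) / w_right.sum()
```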

Relevance:

30.00%

Publisher:

Abstract:

As a new modeling method, support vector regression (SVR) has been regarded as a state-of-the-art technique for regression and approximation. In this study, SVR models were developed to predict body and carcass-related characteristics of 2 strains of broiler chicken. To evaluate the prediction ability of the SVR models, we compared their performance with that of neural network (NN) models. Evaluation of prediction accuracy was based on R², MS error, and bias. The output variables of interest were BW, empty BW, and carcass, breast, drumstick, thigh, and wing weights in the Ross and Cobb strains, predicted from dietary nutrient intake: ME (kcal/bird per week) and CP, TSAA, and Lys (all g/bird per week). A data set composed of 64 measurements taken from each strain was used for this analysis, with 44 data lines used for model training and the remaining 20 used to test the created models. The results of this study revealed that it is possible to satisfactorily estimate the BW and carcass parts of broiler chickens from their dietary nutrient intake. By the statistical criteria used to evaluate the performance of the SVR and NN models, the overall results demonstrate that both models can be effective for accurate prediction of the body and carcass-related characteristics investigated here. However, the SVR method achieved better accuracy and generalization than the NN method, indicating that this data mining technique can be used as an alternative to NN models. Further re-evaluation of the algorithm in future work is suggested.
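A sketch of the comparison protocol with scikit-learn on synthetic stand-in data (the broiler dataset is not reproduced here): 64 records with four nutrient-intake predictors, split 44/20 for training and testing as in the abstract, with R² and MS error computed for an SVR and a small NN. All hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 64 birds, 4 nutrient-intake predictors (ME, CP, TSAA, Lys)
rng = np.random.default_rng(1)
X = rng.uniform(size=(64, 4))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.05, size=64)  # e.g. BW

# 44 training / 20 test rows, mirroring the split described in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=20, random_state=0)

for name, model in [("SVR", make_pipeline(StandardScaler(), SVR(C=10.0))),
                    ("NN", make_pipeline(StandardScaler(),
                                         MLPRegressor(hidden_layer_sizes=(16,),
                                                      max_iter=5000,
                                                      random_state=0)))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, r2_score(y_te, pred), mean_squared_error(y_te, pred))
```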

Relevance:

30.00%

Publisher:

Abstract:

The Brazilian Association of Simmental and Simbrasil Cattle Farmers provided 29,510 records from 10,659 Simmental beef cattle; these were used to estimate (co)variance components and genetic parameters for weights along the growth trajectory, based on multi-trait models (MTM) and random regression models (RRM). The (co)variance components and genetic parameters were estimated by restricted maximum likelihood. In the MTM analysis, the likelihood ratio test was used to determine the significance of the random effects included in the model and to define the most appropriate model; all random effects were significant and were included in the final model. In the RRM analysis, different adjustments of polynomial orders were compared under 5 different criteria to choose the best-fitting model. An RRM of third order for the direct additive genetic, direct permanent environmental, maternal additive genetic, and maternal permanent environmental effects was sufficient to model the variance structures in the growth trajectory of the animals. The (co)variance components were generally similar in the MTM and RRM. Direct heritabilities from the MTM were slightly lower than those from the RRM, ranging from 0.04 to 0.42 and from 0.16 to 0.45, respectively. Additive direct correlations were mostly positive and of high magnitude, being highest between the closest ages. Considering these results, and because pre-adjustment of the weights to standard ages is not required, the RRM is recommended for the genetic evaluation of Simmental beef cattle in Brazil.
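A minimal sketch of a third-order random regression fit by REML, using a Legendre basis on standardized age and statsmodels' mixed-model routine on synthetic records. It carries only a single animal-level random trajectory, whereas the paper's model also includes maternal genetic and permanent environmental terms and pedigree relationships; all data and constants are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from numpy.polynomial import legendre

# Synthetic stand-in for repeated weights: 50 animals, ages 0-730 days
rng = np.random.default_rng(2)
rows = []
for animal in range(50):
    u = rng.normal(scale=[8.0, 4.0, 2.0])      # animal-specific Legendre coefs
    for age in rng.choice(np.arange(0, 731, 30), size=6, replace=False):
        t = 2 * age / 730 - 1                  # map age to [-1, 1]
        phi = legendre.legvander([t], 2)[0]    # 3-coefficient Legendre basis
        wt = 250 + 150 * t + phi @ u + rng.normal(scale=5)
        rows.append((animal, age, t, *phi, wt))
df = pd.DataFrame(rows, columns=["animal", "age", "t", "l0", "l1", "l2", "wt"])

# Random-regression model: fixed trajectory plus animal-specific Legendre
# terms, fitted by REML (statsmodels' default for MixedLM)
m = smf.mixedlm("wt ~ l1 + l2", df, groups="animal",
                re_formula="~ l1 + l2").fit(reml=True)
print(m.cov_re)                                # estimated (co)variance components
```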

Relevance:

30.00%

Publisher:

Abstract:

An extension of some standard likelihood-based procedures to heteroscedastic nonlinear regression models under scale mixtures of skew-normal (SMSN) distributions is developed. This novel class of models provides a useful generalization of the heteroscedastic symmetrical nonlinear regression models (Cysneiros et al., 2010), since the random-term distributions cover symmetric as well as asymmetric and heavy-tailed distributions such as the skew-t, skew-slash, and skew-contaminated normal, among others. A simple EM-type algorithm for iteratively computing maximum likelihood estimates of the parameters is presented, and the observed information matrix is derived analytically. To examine the performance of the proposed methods, simulation studies are presented that show the robustness of this flexible class against outlying and influential observations and that the maximum likelihood estimates based on the EM-type algorithm have good asymptotic properties. Furthermore, local influence measures and the one-step approximations of the estimates in the case-deletion model are obtained. Finally, an illustration of the methodology is given for a data set previously analyzed under the homoscedastic skew-t nonlinear regression model.
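The flavor of the EM-type algorithm can be seen in its simplest symmetric special case, linear regression with Student-t errors (a scale mixture of normals): the E-step computes expected mixing weights and the M-step is weighted least squares. Skewness, nonlinearity, and heteroscedasticity, which the paper handles, are omitted here, and the degrees of freedom ν are fixed rather than estimated.

```python
import numpy as np

def t_regression_em(X, y, nu=4.0, iters=100):
    """EM for linear regression with Student-t errors: the symmetric
    scale-mixture-of-normals special case of the models in the abstract."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(iters):
        r2 = (y - X @ beta) ** 2 / sigma2
        w = (nu + 1.0) / (nu + r2)                 # E-step: mixing weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y) # M-step: weighted LS
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / n
    return beta, sigma2
```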

Relevance:

30.00%

Publisher:

Abstract:

Background: We aimed to investigate the performance of five different trend analysis criteria for the detection of glaucomatous progression and to determine the most frequently and rapidly progressing locations of the visual field. Design: Retrospective cohort. Participants or Samples: Treated glaucoma patients with ≥8 Swedish Interactive Thresholding Algorithm (SITA)-standard 24-2 visual field tests. Methods: Progression was determined using trend analysis. Five different criteria were used: (A) ≥1 significantly progressing point; (B) ≥2 significantly progressing points; (C) ≥2 progressing points located in the same hemifield; (D) ≥2 adjacent progressing points located in the same hemifield; (E) ≥2 progressing points in the same Garway-Heath map sector. Main Outcome Measures: Number of progressing eyes and false-positive results. Results: We included 587 patients. The number of eyes reaching a progression endpoint under each criterion was: A = 300 (51%); B = 212 (36%); C = 194 (33%); D = 170 (29%); and E = 186 (31%) (P = 0.03). The numbers of eyes with positive slopes were: A = 13 (4.3%); B = 3 (1.4%); C = 3 (1.5%); D = 2 (1.1%); and E = 3 (1.6%) (P = 0.06). The global slopes for progressing eyes were more negative in Groups B, C and D than in Group A (P = 0.004). The visual field locations that progressed most often were those in the nasal field adjacent to the horizontal midline. Conclusions: Pointwise linear regression criteria that take into account the retinal nerve fibre layer anatomy enhance the specificity of trend analysis for the detection of glaucomatous visual field progression.
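A sketch of the pointwise linear-regression machinery behind criteria A-E: each visual-field location gets its own slope and p-value over time, and a criterion then counts the flagged locations. The slope and significance cut-offs below are assumptions (the abstract does not state them), and the adjacency/hemifield logic of criteria C-E is not implemented.

```python
import numpy as np
from scipy.stats import linregress

def progressing_points(series, times, slope_cut=-1.0, p_cut=0.01):
    """Pointwise linear regression: flag locations whose sensitivity
    declines faster than slope_cut dB/year with p < p_cut.
    series: (n_tests, n_locations) array of sensitivities."""
    flags = np.zeros(series.shape[1], dtype=bool)
    for loc in range(series.shape[1]):
        fit = linregress(times, series[:, loc])
        flags[loc] = (fit.slope < slope_cut) and (fit.pvalue < p_cut)
    return flags

def criterion_A(flags):
    return flags.sum() >= 1      # >=1 significantly progressing point

def criterion_B(flags):
    return flags.sum() >= 2      # >=2 significantly progressing points
```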

Relevance:

30.00%

Publisher:

Abstract:

Model diagnostics are an integral part of model determination, and residual analysis is an important component of such diagnostics. We adapt and implement residuals considered in the literature for the probit, logistic and skew-probit links under binary regression. New latent residuals for the skew-probit link are proposed here. Using the residuals proposed here, we detect the presence of outliers under different models in a simulated dataset and a real medical dataset.
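For orientation, the sketch below fits a probit model with statsmodels and inspects the standard deviance and Pearson residuals to flag outlying observations; the latent residuals proposed in the paper for the skew-probit link are a different construction and are not reproduced here. All data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = sm.add_constant(rng.normal(size=(200, 2)))
link = sm.families.links.Probit()
y = rng.binomial(1, link.inverse(X @ np.array([0.3, 1.0, -0.8])))

fit = sm.GLM(y, X, family=sm.families.Binomial(link=link)).fit()
dev, pear = fit.resid_deviance, fit.resid_pearson   # standard residuals
print(np.where(np.abs(dev) > 2)[0])                 # candidate outliers
```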

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis work is the characterization of an optical sensor for haematocrit reading and the development of the device's calibration algorithm. In other words, using data obtained from an appropriately planned calibration session, the developed algorithm returns the interpolation curve that characterizes the transducer. The main steps of the work are summarized in the following points: 1) Planning of the calibration session needed for data collection and the subsequent construction of a black-box model. Output: the signal from the optical sensor (reading expressed in mV). Input: the haematocrit value expressed in percentage points (this quantity represents the true blood-volume fraction and was obtained with a blood centrifugation device). 2) Development of the algorithm. The algorithm, developed and used offline, returns the regression curve of the data. At a high level, the code can be divided into two main parts: 1) acquisition of the data coming from the sensor and of the operating state of the two-phase pump; 2) normalization of the acquired data with respect to the sensor's reference value and implementation of the regression algorithm. The normalization step is a fundamental statistical tool for comparing quantities that are not uniform with one another. Existing studies also show a morphological change of the red blood cell in response to mechanical stress. A further aspect addressed in this work concerns the blood flow rate imposed by the pump and how this quantity can influence the haematocrit reading.
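A minimal sketch of the offline pipeline, with made-up numbers: normalize the raw mV readings by the sensor's reference value, fit a least-squares calibration curve against the centrifugation-derived haematocrit, and invert the curve at run time. The polynomial degree, the reference value, and the data points are all illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Hypothetical calibration pairs: haematocrit (%) from the centrifugation
# reference device vs. the optical sensor's raw reading (mV).
hct = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])
mv = np.array([812.0, 790.0, 765.0, 742.0, 716.0, 693.0])

mv_ref = 800.0                    # assumed sensor reference value
mv_norm = mv / mv_ref             # normalisation step from the thesis

# Least-squares calibration curve (quadratic degree chosen arbitrarily)
coef = np.polyfit(hct, mv_norm, deg=2)

def hct_from_reading(reading_mv):
    """Invert the calibration curve to estimate haematocrit at run time."""
    roots = (np.poly1d(coef) - reading_mv / mv_ref).roots
    real = roots[np.isreal(roots)].real
    return real[(real > 0) & (real < 100)]

print(hct_from_reading(750.0))    # reading between the 30% and 35% points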

Relevance:

30.00%

Publisher:

Abstract:

Statistical approaches to evaluate higher-order SNP-SNP and SNP-environment interactions are critical in genetic association studies, as susceptibility to complex disease is likely to be related to the interaction of multiple SNPs and environmental factors. Logic regression (Kooperberg et al., 2001; Ruczinski et al., 2003) is one such approach, where interactions between SNPs and environmental variables are assessed in a regression framework and become part of the model search space. In this manuscript we extend the logic regression methodology, originally developed for cohort and case-control studies, to studies of trios with affected probands. Trio logic regression accounts for the linkage disequilibrium (LD) structure in the genotype data and accommodates missing genotypes via haplotype-based imputation. We also derive an efficient algorithm to simulate case-parent trios where genetic risk is determined via epistatic interactions.
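One of the abstract's contributions, simulating case-parent trios whose risk is driven by an epistatic interaction, can be sketched with rejection sampling: draw parental genotypes, transmit alleles Mendelianly, and keep the trio only if the child is affected under a logic-style AND risk term. The allele frequencies, baseline risk, and relative risk below are illustrative, and the paper's actual algorithm is more efficient than plain rejection.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_trio(maf, risk_pair=(0, 1), base=0.05, rr=4.0):
    """Rejection-sampling sketch of case-parent trio simulation where risk
    comes from an epistatic (AND) interaction of two SNPs."""
    while True:
        mother = rng.binomial(2, maf)         # minor-allele counts per SNP
        father = rng.binomial(2, maf)
        # Mendelian transmission: one allele from each parent per SNP
        child = rng.binomial(1, mother / 2) + rng.binomial(1, father / 2)
        i, j = risk_pair
        logic = (child[i] >= 1) and (child[j] >= 1)   # logic-tree style term
        p_case = base * (rr if logic else 1.0)
        if rng.random() < p_case:             # keep only affected probands
            return mother, father, child

maf = np.array([0.3, 0.2, 0.25])
trios = [simulate_trio(maf) for _ in range(100)]
```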

Relevance:

30.00%

Publisher:

Abstract:

Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations as well as 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
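The penalised-spline backbone of the model can be sketched in a few lines: a truncated-line basis with a ridge penalty on the knot coefficients, whose effective degrees of freedom are the trace of the smoother matrix (the quantity the authors control in the Bayesian fit, where the ridge penalty corresponds to a Gaussian random-effect prior that links p-splines to kriging). The latent-variable, spatial, and MCMC layers are omitted, and the knot count and penalty are illustrative.

```python
import numpy as np

def pspline_fit(x, y, n_knots=15, lam=1.0):
    """Penalised spline with a truncated-line basis and a ridge penalty
    on the knot coefficients."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)
    C = np.column_stack([np.ones_like(x), x, Z])
    D = np.diag([0.0, 0.0] + [lam] * n_knots)    # penalise knot terms only
    coef = np.linalg.solve(C.T @ C + D, C.T @ y)
    # Effective degrees of freedom of the smoother: trace of the hat matrix,
    # the quantity one would control via priors in the Bayesian version.
    df = np.trace(C @ np.linalg.solve(C.T @ C + D, C.T))
    return coef, knots, df
```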

Relevance:

30.00%

Publisher:

Abstract:

Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique corrects for the bias induced by the measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the "naïve" approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, across different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that were clustered. Results show a systematic bias even when all the assumptions made by the authors are satisfied. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
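The first technique, regression calibration, is easy to see on synthetic data: fitting a logistic mortality model on an error-prone covariate attenuates its coefficient, and substituting the best linear predictor E[x | w] before fitting largely removes that bias. The sketch assumes the measurement-error variance is known, whereas the thesis estimates it with the Stage and Wykoff (1998) estimator; all numbers are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)                        # true competition variable
w = x + rng.normal(scale=0.8, size=n)         # error-prone measurement
p = 1 / (1 + np.exp(-(-2.0 + 1.0 * x)))       # true mortality model
dead = rng.binomial(1, p)

# Naive fit: use w as if it were x (attenuates the slope)
naive = sm.Logit(dead, sm.add_constant(w)).fit(disp=0)

# Regression calibration: replace w with E[x | w] before fitting.
# With known error variance, E[x|w] = mu + (var_x / var_w) * (w - mu).
var_w, var_u = w.var(), 0.8 ** 2
x_hat = w.mean() + (var_w - var_u) / var_w * (w - w.mean())
rc = sm.Logit(dead, sm.add_constant(x_hat)).fit(disp=0)
print(naive.params, rc.params)                # RC slope is closer to 1.0
```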

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we consider Bayesian inference on the detection of variance change-point models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and thick-tailed and includes as special cases the Gaussian, Student-t, contaminated normal, and slash distributions. The proposed models provide greater flexibility for analyzing practical data, which often exhibit heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters in the variance change-point models with SMN distributions. Due to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type sampling algorithm with Metropolis-Hastings steps for posterior Bayesian inference. Thereafter, following the idea of [1], we consider the problems of single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies. A real application to closing-price data from the U.S. stock market is analyzed for illustrative purposes.
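A sketch of the sampler in the simplest member of the SMN family, a single variance change-point in zero-mean Gaussian data: the segment variances have conjugate inverse-gamma full conditionals, and the change-point is drawn from its discrete full conditional, so no Metropolis-Hastings step is needed in this special case. Priors and data are illustrative assumptions.

```python
import numpy as np

def gibbs_var_changepoint(y, iters=2000, a=2.0, b=1.0, seed=0):
    """Gibbs sampler for one variance change-point in zero-mean Gaussian
    data. Priors: k uniform on {1,...,n-1}; sigma_j^2 ~ Inverse-Gamma(a, b)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    total = (y ** 2).sum()
    head = np.cumsum(y ** 2)[:-1]          # sum of y_i^2 for i < k, k = 1..n-1
    ks = np.arange(1, n)
    k = n // 2
    draws = []
    for _ in range(iters):
        # sigma_j^2 | k: conjugate inverse-gamma updates for each segment
        s1 = 1.0 / rng.gamma(a + k / 2, 1.0 / (b + head[k - 1] / 2))
        s2 = 1.0 / rng.gamma(a + (n - k) / 2, 1.0 / (b + (total - head[k - 1]) / 2))
        # k | sigma^2: discrete full conditional over all candidate splits
        loglik = (-ks / 2) * np.log(s1) - head / (2 * s1) \
                 - ((n - ks) / 2) * np.log(s2) - (total - head) / (2 * s2)
        p = np.exp(loglik - loglik.max())
        k = rng.choice(ks, p=p / p.sum())
        draws.append((k, s1, s2))
    return draws

# Illustrative use: variance jumps from 1 to 3 at t = 120
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 120), rng.normal(0, 3, 80)])
ks = np.array([d[0] for d in gibbs_var_changepoint(y)[500:]])
print(np.median(ks))                       # posterior median near 120
```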

Relevance:

30.00%

Publisher:

Abstract:

In clinical practice, traditional X-ray radiography is widely used, and knowledge of landmarks and contours in anteroposterior (AP) pelvis X-rays is invaluable for computer-aided diagnosis, hip surgery planning and image-guided interventions. This paper presents a fully automatic approach for landmark detection and shape segmentation of both pelvis and femur in conventional AP X-ray images. Our approach is based on the framework of landmark detection via Random Forest (RF) regression and shape regularization via hierarchical sparse shape composition. We propose a visual feature, FL-HoG (Flexible-Level Histogram of Oriented Gradients), and a feature selection algorithm based on trace ratio optimization to improve the robustness and efficacy of RF-based landmark detection. The landmark detection result is then used in a hierarchical sparse shape composition framework for shape regularization. Finally, the extracted shape contour is fine-tuned by a post-processing step based on low-level image features. The experimental results demonstrate that our feature selection algorithm reduces the feature dimension by a factor of 40 and improves both training and test efficiency. Further experiments conducted on 436 clinical AP pelvis X-rays show that our approach achieves an average point-to-curve error of around 1.2 mm for the femur and 1.9 mm for the pelvis.
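The regression-voting core of such a detector can be sketched with off-the-shelf pieces: HoG descriptors of image patches (a simplified stand-in for the paper's FL-HoG) and a random forest that regresses each patch's displacement to the landmark, whose votes are then aggregated. The trace-ratio feature selection, the hierarchical sparse shape composition, and the post-processing are omitted; patch sizes and parameters are assumptions, and the mean is a crude stand-in for the paper's vote aggregation.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestRegressor

def patch_features(image, center, size=32):
    """HoG descriptor of a square patch around `center` (a simplified
    stand-in for the paper's FL-HoG feature)."""
    r, c = center
    patch = image[r - size // 2 : r + size // 2, c - size // 2 : c + size // 2]
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# RF regression voting: each sampled patch predicts its offset to the landmark
rng = np.random.default_rng(6)
image = rng.random((256, 256))                 # placeholder for an X-ray
landmark = np.array([140, 130])
centers = rng.integers(40, 216, size=(200, 2))
X = np.stack([patch_features(image, c) for c in centers])
y = landmark - centers                         # displacement targets
rf = RandomForestRegressor(n_estimators=50).fit(X, y)

# At test time the patches would come from a new image; here we simply
# aggregate the training patches' votes into a landmark estimate.
votes = centers + rf.predict(X)
print(votes.mean(axis=0))                      # close to the landmark position
```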