964 results for latent class
Abstract:
It is often debated whether migraine with aura (MA) and migraine without aura (MO) are etiologically distinct disorders. A previous study using latent class analysis (LCA) in Australian twins showed no evidence for separate subtypes of MO and MA. The aim of the present study was to replicate these results in a population of Dutch twins and their parents, siblings and partners (N = 10,144). Latent class analysis of International Headache Society (IHS)-based migraine symptoms identified 4 classes: a class of unaffected subjects (class 0), a mild form of nonmigrainous headache (class 1), a moderately severe type of migraine (class 2), typically without neurological symptoms or aura (8% reporting aura symptoms), and a severe type of migraine (class 3), typically with neurological symptoms and with aura symptoms in approximately half of the cases. Given the overlap of neurological symptoms and the nonmutual exclusivity of aura symptoms, these results do not support the MO and MA subtypes as being etiologically distinct. The heritability of migraine in female twins based on the LCA classification was estimated at .50 (95% confidence interval [CI] .27-.59), similar to the IHS-based migraine diagnosis (h² = .49, 95% CI .19-.57). However, using a dichotomous classification (affected-unaffected) decreased heritability for the IHS-based classification (h² = .33, 95% CI .00-.60), but not for the LCA-based classification (h² = .51, 95% CI .23-.61). Importantly, use of the LCA-based classification increased the number of subjects classified as affected. The heritability of the screening question was similar to that of the more detailed LCA and IHS classifications, suggesting that the screening procedure is an important determining factor in genetic studies of migraine.
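As a rough illustration of the LCA machinery behind such symptom-based classifications, the following is a minimal EM sketch for a latent class model with binary indicators; the data, class count, and names are illustrative, not from the study.

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """EM for a latent class model with binary indicators.

    X: (n, J) 0/1 array of symptom indicators.
    Returns class weights pi (K,) and item probabilities theta (K, J).
    """
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, J))
    for _ in range(n_iter):
        # E-step: posterior class membership for each subject (in log space).
        log_lik = (X @ np.log(theta).T) + ((1 - X) @ np.log(1 - theta).T)
        log_r = np.log(pi) + log_lik
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and per-class symptom probabilities.
        pi = r.mean(axis=0)
        theta = (r.T @ X + 1e-6) / (r.sum(axis=0)[:, None] + 2e-6)
    return pi, theta

# Toy usage: 4 latent classes over 10 binary IHS-style symptom items.
X = (np.random.default_rng(1).random((500, 10)) < 0.3).astype(float)
pi, theta = fit_lca(X, n_classes=4)
print(pi.round(3))
```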
Abstract:
While many offline retailers have developed informational websites that offer information on products and prices, the key question for such websites is whether they can increase revenues via web-to-store shopping. The current paper draws on the information search literature to specify and test hypotheses regarding the offline revenue impact of adding an informational website. Explicitly considering marketing efforts, a latent class model distinguishes consumer segments with different short-term revenue effects, while a vector autoregressive model on these segments reveals different long-term marketing responses. We find that the offline revenue impact of the informational website critically depends on the product category and customer segment. The lower online search costs are especially beneficial for sensory products and for customers distant from the store. Moreover, offline revenues increase most for customers with high web visit frequency. We find that customers in some segments buy more, and more expensive, products, suggesting that online search and offline purchases are complements. In contrast, customers in a particular segment reduce their shopping trips, suggesting that their online activities partially substitute for experiential shopping in the physical store. Hence, offline retailers should use specific online activities to target specific product categories and customer segments.
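To illustrate the long-term response modeling, here is a minimal sketch of fitting a vector autoregressive model with statsmodels; the series names and data are hypothetical stand-ins, not the paper's segment-level specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical weekly series for one customer segment: offline revenue,
# web visits, and marketing spend (names and data are illustrative).
rng = np.random.default_rng(0)
data = pd.DataFrame(
    rng.normal(size=(104, 3)),
    columns=["offline_revenue", "web_visits", "marketing_spend"],
)

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")  # lag order selected by AIC
irf = results.irf(10)                     # impulse responses over 10 periods
print(results.summary())
```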
Abstract:
The article analyses the technical efficiency of Hungarian crop farms between 2001 and 2009, using panel data and employing both standard stochastic frontier analysis (SFA) and the latent class model (LCM), which accounts for technological differences, to estimate technical efficiency. The findings suggest that technological heterogeneity can be important even in a sector such as arable crop production, where relatively homogeneous technology is assumed. A comparison of standard SFA models, which assume a technology common to all farms, with LCM estimates shows that the technical efficiency of crop farms can be underestimated by traditional SFA models.
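A minimal sketch of the single-frontier SFA building block follows (the LCM extends this by estimating class-specific frontiers jointly); it maximizes the half-normal frontier likelihood, and the data below are simulated, not the Hungarian farm panel.

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, y, X):
    """Negative log-likelihood of a half-normal stochastic production frontier:
    y = X @ beta + v - u,  v ~ N(0, sv^2),  u ~ |N(0, su^2)|."""
    k = X.shape[1]
    beta, ln_sv, ln_su = params[:k], params[k], params[k + 1]
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - X @ beta
    ll = (np.log(2) - np.log(sigma)
          + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# Toy data: log output on log inputs (illustrative only).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(300), rng.normal(size=(300, 2))])
y = (X @ np.array([1.0, 0.5, 0.3])
     + rng.normal(0, 0.2, 300) - np.abs(rng.normal(0, 0.3, 300)))
res = optimize.minimize(neg_loglik, x0=np.zeros(5), args=(y, X), method="BFGS")
print(res.x[:3])  # estimated frontier coefficients
```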
Abstract:
This study aims to analyze the content validity and accuracy measures of the nursing diagnosis Ineffective Self-Health Management in patients undergoing hemodialysis. The diagnosis validation study was carried out in two stages: content analysis by judges and accuracy assessment of clinical indicators. In the first stage, 22 judges evaluated the definition and placement of the diagnosis, its clinical indicators and etiological factors, and their conceptual and empirical definitions. The binomial test was used to determine the proportion of judges rating each component of the nursing diagnosis as relevant. In the second stage, latent class analysis was used to assess diagnostic accuracy by evaluating 200 patients in a hemodialysis clinic in northeastern Brazil. The research was approved by the Ethics Committee under Opinion No. 387 837 and CAAE 18486413.0.0000.5537. The judges evaluated 12 clinical indicators and 22 etiological factors as pertinent. They proposed renaming five indicators and six factors, reclassifying one clinical indicator as an etiological factor, and reclassifying three etiological factors as clinical indicators. Regarding conceptual and empirical definitions, the judges deemed not relevant the conceptual and empirical definitions of one clinical indicator, the conceptual definitions of two etiological factors, and the empirical definitions of four etiological factors. Changes were also suggested to the conceptual and empirical definitions of two clinical indicators, the conceptual definitions of 12 etiological factors, and the empirical definitions of 11 etiological factors. The clinical indicators analyzed in the first stage were then validated clinically in patients undergoing hemodialysis. The most frequent clinical indicators were changes in laboratory tests (100%) and daily life choices ineffective for achieving health goals (81%); three etiological factors had the highest frequencies: unfavorable demographic factors (94.5%), beliefs (79%), and comorbidities (77.5%). From the latent class analysis, the prevalence of the diagnosis was estimated at 66.28%. The clinical indicators with the best sensitivity for the nursing diagnosis were daily life choices ineffective for achieving health goals and expression of difficulty with prescribed regimens. In turn, the clinical indicators inappropriate medication use, no expressed desire to control the disease, irregular attendance at dialysis sessions, and infection were more specific for the diagnosis. Non-adherence to treatment was the only indicator whose confidence intervals for both sensitivity and specificity were statistically above 0.5, making it the indicator with the best diagnostic accuracy for inferring the nursing diagnosis Ineffective Self-Health Management in hemodialysis patients. It is therefore believed that improving the components of this diagnosis will contribute to the development of nursing interventions more attuned to the health status of individuals on hemodialysis, providing more scientifically qualified care.
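As a sketch of how accuracy measures fall out of a latent class fit: in a two-class model where one class represents "diagnosis present", each indicator's sensitivity and specificity are read directly off the class-conditional probabilities. The numbers below are illustrative, not the study's estimates.

```python
import numpy as np

# Given a two-class LCA fit (e.g., via an EM routine like the fit_lca sketch
# earlier) where class 1 is "diagnosis present" and class 0 is "absent":
pi = np.array([0.34, 0.66])            # class weights; ~66% prevalence
theta = np.array([[0.30, 0.10, 0.20],  # P(indicator present | class 0)
                  [0.85, 0.70, 0.55]]) # P(indicator present | class 1)

sensitivity = theta[1]       # P(indicator observed | diagnosis present)
specificity = 1 - theta[0]   # P(indicator absent  | diagnosis absent)
print(sensitivity, specificity)
```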
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation; the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
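The chapter's approximation is derived analytically for Diaconis-Ylvisaker priors; as a generic illustration of approximating a posterior by a Gaussian, here is a Laplace-style sketch for a Bayesian logistic model. This is a stand-in for the idea, not the chapter's construction, and all data are simulated.

```python
import numpy as np
from scipy import optimize

def neg_log_post(beta, X, y, tau=1.0):
    """Negative log-posterior of logistic regression with a N(0, tau^2 I) prior."""
    eta = X @ beta
    return (np.logaddexp(0, eta).sum() - y @ eta
            + 0.5 * beta @ beta / tau**2)

# Toy data (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([1.0, -0.5, 0.25])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

# Posterior mode and Hessian give the Gaussian approximation N(mode, H^{-1}).
res = optimize.minimize(neg_log_post, np.zeros(3), args=(X, y), method="BFGS")
mode = res.x
p = 1 / (1 + np.exp(-X @ mode))
H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(3)  # Hessian at mode (tau = 1)
cov = np.linalg.inv(H)
print(mode, np.sqrt(np.diag(cov)))
```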
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
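A minimal sketch of the Albert-Chib truncated normal data augmentation sampler for probit regression, with a flat prior on the coefficients, shows the mechanism whose mixing behavior is analyzed; the rare-event data below are simulated for illustration.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, seed=0):
    """Albert--Chib data augmentation Gibbs sampler for probit regression
    with a flat prior on beta (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    for t in range(n_iter):
        # Sample latent utilities z_i ~ N(x_i'beta, 1), truncated by y_i.
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)  # z > 0 when y = 1
        hi = np.where(y == 1, np.inf, -mu)   # z < 0 when y = 0
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # Sample beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
        beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(k)
        draws[t] = beta
    return draws

# Rare-event toy data: large n, few successes, to exhibit the slow mixing.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(5000), rng.normal(size=5000)])
y = (rng.random(5000) < 0.01).astype(float)
draws = probit_gibbs(X, y, n_iter=500)
print(np.corrcoef(draws[:-1, 0], draws[1:, 0])[0, 1])  # lag-1 autocorrelation
```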
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when the analysis is based only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their attrition-corrected inferences are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
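A minimal sketch of constructing such augmented records, with illustrative variable names rather than actual ACS fields:

```python
import numpy as np
import pandas as pd

# Original categorical survey data (columns are illustrative).
data = pd.DataFrame({
    "education": ["HS", "BA", "BA", "MA"],
    "employed":  ["yes", "no", "yes", "yes"],
    "region":    ["NE", "S", "W", "MW"],
})

# Prior belief about the margin of `education`, encoded as n_aug synthetic
# rows whose other variables are left missing; larger n_aug = stronger prior.
prior_margin = {"HS": 0.3, "BA": 0.5, "MA": 0.2}
n_aug = 100
aug = pd.DataFrame({
    "education": np.random.default_rng(0).choice(
        list(prior_margin), size=n_aug, p=list(prior_margin.values())),
    "employed": pd.NA,
    "region": pd.NA,
})

# The latent class MCMC then runs on the concatenated data, treating the
# missing entries in the synthetic rows as ordinary missing values.
augmented = pd.concat([data, aug], ignore_index=True)
print(augmented.tail())
```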
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
Abstract:
This study examines the business model complexity of Irish credit unions using a latent class approach to measure structural performance over the period 2002 to 2013. The latent class approach allows the endogenous identification of a multi-class framework for business models based on credit union specific characteristics. The analysis finds a three-class system to be appropriate, with the multi-class model dependent on three financial viability characteristics. This finding is consistent with the deliberations of the Irish Commission on Credit Unions (2012), which identified complexity and diversity in the business models of Irish credit unions and recommended that such complexity and diversity could not be accommodated within a one-size-fits-all regulatory framework. The analysis also highlights that two of the classes are subject to diseconomies of scale. This may suggest that these credit unions would benefit from a reduction in scale, or perhaps that there is an imbalance in the present change process. Finally, relative performance differences are identified for each class in terms of technical efficiency, suggesting that credit unions could improve their performance by adopting within-class best practice or by switching to another class.
Abstract:
In Senegal, diarrheal diseases remain a heavy burden on children's health. These diseases are influenced by a wide range of factors operating at different levels and spheres of analysis. This article analyzes these risk factors and their relative roles in childhood diarrheal disease in Dakar. In doing so, it illustrates a new approach to synthesizing the network of these determinants. A latent class analysis (LCA) is first conducted, and the resulting latent variables are then used as explanatory variables in a three-level logistic regression. The results confirm that the determinants of childhood diarrhea operate at all three levels of analysis and that behavioral factors and neighborhood sanitation play a predominant role. The results also illustrate the usefulness of LCA for synthesizing multiple indicators to create an integrated causal picture while using parsimonious statistical models.
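The two-step strategy (latent variables from an LCA entering a regression) can be sketched as follows; a single-level logistic regression stands in for the article's three-level model, and all variables are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Two-step sketch: latent class memberships (e.g., from an LCA of household
# sanitation and behaviour indicators) enter a logistic regression for
# diarrhea. A single-level logit stands in for the three-level model.
rng = np.random.default_rng(0)
n = 1000
latent_class = rng.integers(0, 3, size=n)       # class labels from an LCA
class_dummies = np.eye(3)[latent_class][:, 1:]  # reference = class 0
child_age = rng.uniform(0, 5, size=n)

X = sm.add_constant(np.column_stack([class_dummies, child_age]))
p = 1 / (1 + np.exp(-(X @ np.array([-1.5, 0.8, 1.2, -0.1]))))
diarrhea = rng.binomial(1, p)

fit = sm.Logit(diarrhea, X).fit(disp=0)
print(fit.params)
```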
Abstract:
Leishmaniasis, caused by Leishmania infantum, is a vector-borne zoonotic disease that is endemic to the Mediterranean basin. The potential of rabbits and hares to serve as competent reservoirs for the disease has recently been demonstrated, although assessment of the importance of their role in disease dynamics is hampered by the absence of quantitative knowledge on the accuracy of diagnostic techniques in these species. A Bayesian latent-class model was used here to estimate the sensitivity and specificity of the immunofluorescence antibody test (IFAT) in serum and a Leishmania-nested PCR (Ln-PCR) in skin for samples collected from 217 rabbits and 70 hares from two different populations in the region of Madrid, Spain. A two-population model, assuming conditional independence between test results and incorporating prior information on the performance of the tests in other animal species obtained from the literature, was used. Two alternative cut-off values were assumed for the interpretation of the IFAT results: 1/50 for the conservative and 1/25 for the sensitive interpretation. Results suggest that the sensitivity and specificity of the IFAT were around 70-80%, whereas the Ln-PCR was highly specific (96%) but had limited sensitivity (28.9% under the conservative interpretation and 21.3% under the sensitive one). Prevalence was higher in the rabbit population (50.5% and 72.6% for the conservative and sensitive interpretations, respectively) than in hares (6.7% and 13.2%). Our results demonstrate that the IFAT may be a useful screening tool for the diagnosis of leishmaniasis in rabbits and hares. These results will help to design and implement surveillance programmes in wild species, with the ultimate objective of early detection and prevention of incursions of the disease into domestic animal and human populations.
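A sketch of this kind of two-population ("Hui-Walter") latent-class model, written in PyMC under conditional independence; the counts and priors below are invented for illustration and are not the study's data or its exact prior specification.

```python
import numpy as np
import pymc as pm

# Cross-classified results (IFAT x Ln-PCR) for two populations; the
# counts are made up for illustration, not the study's data.
counts = np.array([[40, 30, 15, 132],   # rabbits: ++, +-, -+, --
                   [2, 3, 1, 64]])      # hares
n = counts.sum(axis=1)

with pm.Model() as hui_walter:
    se1 = pm.Beta("se_IFAT", 2, 1)
    sp1 = pm.Beta("sp_IFAT", 2, 1)
    se2 = pm.Beta("se_LnPCR", 2, 1)
    sp2 = pm.Beta("sp_LnPCR", 2, 1)
    prev = pm.Beta("prev", 1, 1, shape=2)  # one prevalence per population
    for i in range(2):
        # Cell probabilities under conditional independence given status.
        p = pm.math.stack([
            prev[i] * se1 * se2 + (1 - prev[i]) * (1 - sp1) * (1 - sp2),
            prev[i] * se1 * (1 - se2) + (1 - prev[i]) * (1 - sp1) * sp2,
            prev[i] * (1 - se1) * se2 + (1 - prev[i]) * sp1 * (1 - sp2),
            prev[i] * (1 - se1) * (1 - se2) + (1 - prev[i]) * sp1 * sp2,
        ])
        pm.Multinomial(f"y_{i}", n=n[i], p=p, observed=counts[i])
    trace = pm.sample(2000, tune=1000, chains=2)
```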
Abstract:
Local spatio-temporal features combined with a bag-of-visual-words model are a popular approach to human action recognition. Bag-of-features methods face several challenges, such as extracting appropriate appearance and motion features from videos, converting the extracted features into a form suitable for classification, and designing a suitable classification framework. In this paper we address the problem of efficiently representing the extracted features for classification to improve overall performance. We introduce two generative supervised topic models, maximum entropy discrimination LDA (MedLDA) and class-specific simplex LDA (css-LDA), to encode the raw features in a form suitable for discriminative SVM-based classification. Unsupervised LDA models disconnect topic discovery from the classification task and hence yield poor results compared to the baseline bag-of-words framework. Supervised LDA techniques, on the other hand, learn the topic structure by considering the class labels and improve recognition accuracy significantly. MedLDA maximizes the likelihood and within-class margins using max-margin techniques and yields a sparse, highly discriminative topic structure, while css-LDA learns separate class-specific topics instead of a common set of topics across the entire dataset. In our representation, topics are first learned and each video is then represented as a topic proportion vector, comparable to a histogram of topics. Finally, SVM classification is performed on the learned topic proportion vectors. We demonstrate the efficiency of these two representation techniques through experiments on two popular datasets. Experimental results show significantly improved performance compared to the baseline bag-of-features framework, which uses k-means to construct a histogram of words from the feature vectors.
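The representation-plus-classifier pipeline can be sketched with scikit-learn; since scikit-learn has no MedLDA or css-LDA implementation, unsupervised LDA stands in here, so the sketch corresponds to the unsupervised baseline rather than the proposed supervised models. All data are simulated.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

# Hypothetical bag-of-visual-words counts per video (rows) over a
# k-means codebook (columns); labels are action classes.
rng = np.random.default_rng(0)
bow = rng.poisson(2.0, size=(300, 1000))
labels = rng.integers(0, 5, size=300)

# Learn topics, then represent each video as its topic proportion vector.
lda = LatentDirichletAllocation(n_components=30, random_state=0)
topic_proportions = lda.fit_transform(bow)  # one topic histogram per video

# SVM classification on the topic proportion vectors.
clf = SVC(kernel="rbf").fit(topic_proportions[:200], labels[:200])
print(clf.score(topic_proportions[200:], labels[200:]))
```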
Abstract:
Song-selection and mood are interdependent. If we capture a song's sentiment, we can determine the mood of the listener, which can serve as a basis for recommendation systems. Songs are generally classified by genre, which does not entirely reflect sentiment; thus, an unsupervised scheme is required to mine sentiments. Sentiments are classified into either two (positive/negative) or multiple (happy/angry/sad/...) classes, depending on the application. We are interested in analyzing the feelings invoked by a song, involving multi-class sentiments. To mine the hidden sentimental structure behind a song in terms of "topics", we consider its lyrics and use Latent Dirichlet Allocation (LDA). Each song is a mixture of moods, and the topics mined by LDA can represent those moods, yielding a scheme for collecting similar-mood songs. For validation, we use a dataset of songs covering 6 moods annotated by users of a particular website.
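A minimal sketch of the lyrics-to-mood-topics idea, using scikit-learn's LDA on toy lyric snippets (illustrative, not the annotated dataset):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny illustrative lyric snippets; a real corpus would use full lyrics.
lyrics = [
    "dance all night feel the light",
    "tears fall down alone tonight",
    "burning rage fight the pain",
    "sunshine smile happy day",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(lyrics)

# One topic per hypothesized mood; each song is a mixture over them.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
song_moods = lda.fit_transform(X)

# Group songs by dominant topic to collect similar-mood songs.
print(song_moods.argmax(axis=1))
```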
Abstract:
The Gaussian process latent variable model (GP-LVM) has been identified as an effective probabilistic approach for dimensionality reduction because it can obtain a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and a maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM is able to use class label information effectively and thus consistently outperforms the GP-LVM and its discriminative extension. A comparison with supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.
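For orientation, here is a minimal numpy/scipy sketch of the unsupervised GP-LVM's MAP objective (log marginal likelihood of the data plus a standard normal prior on the latent positions); the supervised variant additionally uses label information, which is omitted here, and the kernel hyperparameters are fixed for brevity.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def neg_log_map(x_flat, Y, q, lengthscale=1.0, noise=0.1):
    """Negative MAP objective of a GP-LVM with an RBF kernel and a
    standard normal prior on the latent positions X."""
    n, D = Y.shape
    X = x_flat.reshape(n, q)
    K = np.exp(-0.5 * cdist(X, X, "sqeuclidean") / lengthscale**2)
    K += noise**2 * np.eye(n)
    _, logdet = np.linalg.slogdet(K)
    Kinv_Y = np.linalg.solve(K, Y)
    # -log p(Y | X) - log p(X), dropping additive constants.
    return (0.5 * D * logdet + 0.5 * np.sum(Y * Kinv_Y)
            + 0.5 * np.sum(X**2))

# Toy data: 40 points in 5 dimensions, reduced to q = 2 latent dimensions.
rng = np.random.default_rng(0)
Y = rng.normal(size=(40, 5))
x0 = 0.1 * rng.normal(size=40 * 2)
res = minimize(neg_log_map, x0, args=(Y, 2), method="L-BFGS-B")
X_latent = res.x.reshape(40, 2)
print(X_latent[:3])
```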
Abstract:
Scene classification based on latent Dirichlet allocation (LDA) builds on the more general bag-of-visual-words modeling approach, in which the construction of a visual vocabulary is a crucial quantization step for the success of the classification. A framework is developed with the following new aspects: Gaussian mixture clustering for the quantization process; the use of an integrated visual vocabulary (IVV), built as the union of all centroids obtained from the separate quantization of each class; and the use of several features, including the edge orientation histogram, CIELab color moments, and the gray-level co-occurrence matrix (GLCM). The experiments are conducted on IKONOS images with six semantic classes (tree, grassland, residential, commercial/industrial, road, and water). The results show that the use of an IVV increases the overall accuracy (OA) by 11 to 12% when implemented on the selected features and by 6% on all features. The selected combination of CIELab color moments and GLCM provides a better OA than either CIELab color moments or GLCM individually, which increase the OA by only ∼2 to 3%. Moreover, the results show that the OA of LDA outperforms that of C4.5 and naive Bayes tree by ∼20%. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JRS.8.083690]
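The Gaussian-mixture quantization step can be sketched with scikit-learn; this shows a single shared vocabulary, whereas the paper's IVV is built as the union of per-class vocabularies. The descriptors are simulated stand-ins.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical low-level descriptors (e.g., color moments + GLCM features)
# extracted from image patches; 20 descriptors of dimension 8 per image.
rng = np.random.default_rng(0)
descriptors_per_image = [rng.normal(size=(20, 8)) for _ in range(50)]

# Fit the Gaussian mixture on all descriptors to build the visual vocabulary.
all_desc = np.vstack(descriptors_per_image)
gmm = GaussianMixture(n_components=32, random_state=0).fit(all_desc)

# Quantize: each image becomes a histogram of visual-word assignments,
# the bag-of-visual-words input to the LDA scene classifier.
histograms = np.array([
    np.bincount(gmm.predict(d), minlength=32) for d in descriptors_per_image
])
print(histograms.shape)  # (50, 32)
```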
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model, developed under the assumption that the sample is obtained from a conceptually infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model, with the additional assumptions that all variance components are known and that within-cluster variances are equal, have smaller mean squared error (MSE) than competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best, and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar.
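A minimal sketch of the method-of-moments route to an empirical predictor, using the classical one-way ANOVA estimators and the ME-model shrinkage weight (the finite population mixed model predictor modifies this shrinkage, which is not reproduced here); the data are simulated.

```python
import numpy as np

# Balanced clustered data: m clusters, n units sampled per cluster.
rng = np.random.default_rng(0)
m, n = 30, 8
truth = rng.normal(0, 1.0, size=m)                  # latent cluster effects
y = truth[:, None] + rng.normal(0, 2.0, size=(m, n))

# Method-of-moments variance components from the one-way ANOVA decomposition.
ybar = y.mean(axis=1)
msw = ((y - ybar[:, None]) ** 2).sum() / (m * (n - 1))  # within-cluster MS
msb = n * ((ybar - ybar.mean()) ** 2).sum() / (m - 1)   # between-cluster MS
sigma2_e = msw
sigma2_b = max((msb - msw) / n, 0.0)

# Empirical predictor of each cluster mean: shrink the cluster average
# toward the overall mean by the estimated reliability weight.
w = sigma2_b / (sigma2_b + sigma2_e / n)
predictor = ybar.mean() + w * (ybar - ybar.mean())
print(f"shrinkage weight: {w:.3f}")
```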