928 results for Maximum likelihood channel estimation algorithms


Relevance:

100.00%

Publisher:

Abstract:

This thesis comprises three articles, one published and two in preparation. The central topic of the thesis is the treatment of representative outliers in two important aspects of surveys: small area estimation and imputation in the presence of item nonresponse. Regarding small areas, robust estimators under unit-level models have been studied. Sinha & Rao (2009) propose a robust version of the empirical best linear unbiased predictor for small area means. Their robust estimator is of the plug-in type and, in light of the work of Chambers (1986), can be biased in certain situations. Chambers et al. (2014) propose a bias-corrected estimator. In addition, estimators of the mean squared error have been associated with these point estimators. Sinha & Rao (2009) propose a parametric bootstrap procedure for estimating the mean squared error. Analytical methods are proposed in Chambers et al. (2014); however, their theoretical validity has not been established and their empirical performance is not fully satisfactory. Here, we examine two new approaches for obtaining a robust version of the empirical best linear unbiased predictor: the first builds on the work of Chambers (1986), and the second is based on the concept of conditional bias as a measure of the influence of a population unit. Both classes of robust small area estimators also include a bias-correction term. However, both use the information available in all areas, in contrast to the estimator of Chambers et al. (2014), which uses only the information available in the area of interest. In some situations, a non-negligible bias is possible for the estimator of Sinha & Rao (2009), whereas the proposed estimators exhibit little bias for an appropriate choice of the influence function and the robustness tuning constant. Monte Carlo simulations are carried out, and comparisons are made between the proposed estimators and those of Sinha & Rao (2009) and Chambers et al. (2014). The results show that the estimators of Sinha & Rao (2009) and Chambers et al. (2014) can have substantial bias, whereas the proposed estimators perform better in terms of bias and mean squared error. In addition, we propose a new bootstrap procedure for estimating the mean squared error of robust small area estimators. Unlike existing procedures, we formally establish the asymptotic validity of the proposed bootstrap method. Moreover, the proposed method is semi-parametric, that is, it does not rely on an assumption about the distributions of the errors or the random effects; it is therefore particularly attractive and more widely applicable. We examine the performance of our bootstrap procedure through Monte Carlo simulations. The results show that our procedure performs well and, in particular, better than all the competitors studied. An application of the proposed method is illustrated by analysing the real, outlier-containing data of Battese, Harter & Fuller (1988). Concerning imputation in the presence of item nonresponse, several forms of single imputation have been studied.
Deterministic regression imputation within classes, which includes ratio imputation and mean imputation, is often used in surveys. These imputation methods can lead to biased imputed estimators if the imputation model or the nonresponse model is misspecified. Doubly robust estimators have been developed in recent years; they are unbiased if at least one of the imputation or nonresponse models is correctly specified. However, in the presence of outliers, doubly robust imputed estimators can be very unstable. Using the concept of conditional bias, we propose an outlier-robust version of the doubly robust estimator. Simulation studies show that the proposed estimator performs well for an appropriate choice of the robustness tuning constant.
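
As an illustration of the robustification ingredient the abstract refers to (an influence function plus a tuning constant), the minimal sketch below applies a Huber psi function to standardized residuals so that outlying units contribute only a bounded amount to one update step; the tuning constant c = 1.345 and the toy data are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Huber influence function: identity for small residuals, clipped in the tails."""
    return np.clip(r, -c, c)

# Toy unit-level data for one small area (hypothetical numbers); the last value is an outlier.
y = np.array([10.2, 9.8, 11.0, 10.5, 45.0])
mu_hat = np.median(y)                            # crude initial location fit
scale = 1.4826 * np.median(np.abs(y - mu_hat))   # MAD scale estimate

# One robustified update step: outliers contribute at most +/- c * scale.
resid = (y - mu_hat) / scale
robust_fit = mu_hat + scale * np.mean(huber_psi(resid))
print(robust_fit)
```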

Relevance:

100.00%

Publisher:

Abstract:

In this work, likelihood depths, introduced by Mizera and Müller (2004), are used to develop (outlier-)robust estimators and tests for the unknown parameter of a continuous density function. The developed procedures are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in a data set is computed as the minimum of the proportion of observations for which the derivative of the log-likelihood with respect to the parameter is non-negative and the proportion for which this derivative is non-positive. The parameter for which both proportions are equal therefore has the greatest depth; it is first chosen as the estimator, since the likelihood depth is meant to measure how well a parameter fits the data set. Asymptotically, the parameter with the greatest depth is the one for which the probability that the derivative of the log-likelihood with respect to the parameter is non-negative for an observation equals one half. If this does not hold for the true underlying parameter, the estimator based on the likelihood depth is biased. This work shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplicial likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which the likelihood depth yields biased estimators, the simplicial likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is then known, and tests for various hypotheses can be formulated. The shift in the depth, however, leads to poor power of the corresponding test for some hypotheses; corrected tests are therefore introduced, and conditions are given under which they are consistent. The thesis consists of two parts. The first part presents the general theory of the estimators and tests and proves their consistency. In the second part, the theory is applied to three distributions: the Weibull distribution, the Gaussian copula and the Gumbel copula. This shows how the procedures of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of the distribution. Overall, robust estimators and tests can be found for all three distributions by means of likelihood depths. On uncontaminated data, existing standard methods are partly superior, but the advantage of the new methods becomes apparent on contaminated data and data with outliers.
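
As a concrete reading of the definition above, the following sketch computes the likelihood depth of a one-dimensional parameter as the minimum of the two sign proportions of the score and maximizes it over a grid. The exponential model, its score, and the simulated data are illustrative assumptions, not the distributions treated in the thesis.

```python
import numpy as np

def likelihood_depth(theta, x, score):
    """Likelihood depth: min of the shares of non-negative and non-positive scores."""
    s = score(theta, x)
    return min(np.mean(s >= 0), np.mean(s <= 0))

# Illustrative model: exponential density f(x; lam) = lam * exp(-lam * x),
# with score d/dlam log f(x; lam) = 1/lam - x.
score_exp = lambda lam, x: 1.0 / lam - x

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)         # true rate is 0.5

grid = np.linspace(0.05, 2.0, 400)
depths = [likelihood_depth(lam, x, score_exp) for lam in grid]
lam_hat = grid[int(np.argmax(depths))]

# The depth maximizer is about 1/median(x) (roughly 0.72 here), not the true rate 0.5:
# exactly the kind of bias that the corrected estimators discussed above remove.
print(lam_hat)
```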

Relevance:

100.00%

Publisher:

Abstract:

The present study investigates the systematics and evolution of the Neotropical genus Deuterocohnia Mez (Bromeliaceae). It provides a comprehensive taxonomic revision as well as phylogenetic analyses based on chloroplast and nuclear DNA sequences and presents a hypothesis on the evolution of the genus. A broad morphological, anatomical, biogeographical and ecological overview of the genus is given in the first part of the study. For morphological character assessment, more than 700 herbarium specimens from 39 herbaria as well as living plant material in the field and in the living collections of botanical gardens were carefully examined. The arid habitats in which the species of Deuterocohnia grow are reflected by the morphological and anatomical characters of the species. Important characters for species delimitation were identified, like the length of the inflorescence, the branching order, the density of flowers on partial inflorescences, the relation of the length of the primary bracts to that of the partial inflorescence, the sizes of floral bracts, sepals and petals, flower colour, the presence or absence of a pedicel, and the curvature of the stamens and the petals during anthesis. After scrutinizing the nomenclatural history of the taxa belonging to Deuterocohnia – including the genus Abromeitiella, synonymized with it in 1992 – 17 species, 4 subspecies and 4 varieties are accepted in the present revision. Taxonomic changes were made in the following cases: (I) New combinations: A. abstrusa (A. Cast.) N. Schütz is re-established – as defined by Castellanos (1931) – and transferred to D. abstrusa; D. brevifolia (Griseb.) M.A. Spencer & L.B. Sm. includes accessions of the former D. lorentziana (Mez) M.A. Spencer & L.B. Sm., which are not assigned to D. abstrusa; D. bracteosa W. Till is synonymized with D. strobilifera Mez; D. meziana Kuntze ex Mez var. carmineo-viridiflora Rauh is classified as a subspecies of D. meziana (ssp. carmineo-viridiflora (Rauh) N. Schütz); D. pedicellata W. Till is classified as a subspecies of D. meziana (ssp. pedicellata (W. Till) N. Schütz); D. scapigera (Rauh & L. Hrom.) M.A. Spencer & L.B. Sm. ssp. sanctae-crucis R. Vásquez & Ibisch is classified as a species (D. sanctae-crucis (R. Vásquez & Ibisch) N. Schütz); (II) New taxa: a new subspecies of D. meziana Kuntze ex Mez is established; a new variety of D. scapigera is established (the new taxa will be validly published elsewhere); (III) New type: an epitype for D. longipetala was chosen. All other species were kept according to Spencer and Smith (1992) or – in the case of more recently described species – according to the protologue. Besides the nomenclatural notes and the detailed descriptions, information on distribution, habitat and ecology, etymology and taxonomic delimitation is provided for the genus and for each of its species. A key was constructed for the identification of currently accepted species, subspecies and varieties. The key is based on easily detectable morphological characters. The former synonymization of the genus Abromeitiella into Deuterocohnia (Spencer and Smith 1992) is re-evaluated in the present study. Morphological as well as molecular investigations revealed Deuterocohnia incl. Abromeitiella as being monophyletic, with some indications that a monophyletic Abromeitiella lineage arose from within Deuterocohnia. Thus the union of both genera is confirmed. The second part of the present thesis describes and discusses the molecular phylogenies and networks.
Molecular analyses of three chloroplast intergenic spacers (rpl32-trnL, rps16-trnK, trnS-ycf3) were conducted with a sample set of 119 taxa. This set included 103 Deuterocohnia accessions from all 17 described species of the genus and 16 outgroup taxa from the remainder of Pitcairnioideae s.str. (Dyckia (8 sp.), Encholirium (2 sp.), Fosterella (4 sp.) and Pitcairnia (2 sp.)). With its high sampling density, the present investigation represents by far the most comprehensive molecular study of Deuterocohnia to date. All data sets were analyzed separately as well as in combination, and various optimality criteria for phylogenetic tree construction were applied (Maximum Parsimony, Maximum Likelihood, Bayesian inference and the distance method Neighbour Joining). Congruent topologies were generally obtained with different algorithms and optimality criteria, but individual clades received different degrees of statistical support in some analyses. The rps16-trnK locus was the most informative among the three spacer regions examined. The results of the chloroplast DNA analyses revealed a highly supported paraphyly of Deuterocohnia. Thus, the cpDNA trees divide the genus into two subclades (A and B), of which Deuterocohnia subclade B is sister to the included Dyckia and Encholirium accessions, and both together are sister to Deuterocohnia subclade A. To further examine the relationship between Deuterocohnia and Dyckia/Encholirium at the generic level, two nuclear low-copy markers (PRK exon2-5 and PHYC exon1) were analysed with a reduced taxon set. This set included 22 Deuterocohnia accessions (including members of both cpDNA subclades), 2 Dyckia, 2 Encholirium and 2 Fosterella species. Phylogenetic trees were constructed as described above, and for comparison the same reduced taxon set was also analysed at the three cpDNA loci. In contrast to the cpDNA results, the nuclear DNA data strongly supported the monophyly of Deuterocohnia, which takes a sister position to a clade of Dyckia and Encholirium samples. As morphology as well as the nuclear DNA data generated in the present study and in a former AFLP analysis (Horres 2003) all corroborate the monophyly of Deuterocohnia, the apparent paraphyly displayed in the cpDNA analyses is interpreted as the consequence of a chloroplast capture event. This involves the introgression of the chloroplast genome from the common ancestor of the Dyckia/Encholirium lineage into the ancestor of the Deuterocohnia subclade B species. The chloroplast haplotypes are not species-specific in Deuterocohnia. Thus, one haplotype was sometimes shared by several species, while the same species may harbour different haplotypes. The arrangement of haplotypes followed geographical patterns rather than taxonomic boundaries, which may indicate some residual gene flow among populations from different Deuterocohnia species. Phenotypic species coherence against the background of ongoing gene flow may then be maintained by sets of co-adapted alleles, as suggested by the porous genome concept (Wu 2001, Palma-Silva et al. 2011). The results of the present study suggest the following scenario for the evolution of Deuterocohnia and its species. Deuterocohnia longipetala may be envisaged as a representative of the ancestral state within the genus.
This is supported by (1) the wide distribution of this species; (2) the overlap in distribution area with species of Dyckia; (3) the laxly flowered inflorescences, which are also typical for Dyckia; (4) the yellow petals with a greenish tip, present in most other Deuterocohnia species. The following six extant lineages within Deuterocohnia might have independently been derived from this ancestral state with a few changes each: (I) D. meziana, D. brevispicata and D. seramisiana (Bolivia, lowland to montane areas, mostly reddish-greenish coloured, very laxly to very densely flowered); (II) D. strobilifera (Bolivia, high Andean mountains, yellow flowers, densely flowered); (III) D. glandulosa (Bolivia, montane areas, yellow-greenish flowers, densely flowered); (IV) D. haumanii, D. schreiteri, D. digitata, and D. chrysantha (Argentina, Chile, E Andean mountains and Atacama desert, yellow-greenish flowers, densely flowered); (V) D. recurvipetala (Argentina, foothills of the Andes, recurved yellow flowers, laxly flowered); (VI) D. gableana, D. scapigera, D. sanctae-crucis, D. abstrusa, D. brevifolia, D. lotteae (former Abromeitiella species, Bolivia, Argentina, higher Andean mountains, greenish-yellow flowers, inflorescence usually simple). Originating from the lower montane Andean regions, at least four lineages of the genus (I, II, IV, VI) adapted in part to higher altitudes by developing densely flowered partial inflorescences, shorter flowers and – in at least three lineages (II, IV, VI) – smaller rosettes, whereas species spreading into the lowlands (I, V) developed larger plants, laxly flowered, amply branched inflorescences and in part larger flowers (I).

Relevance:

100.00%

Publisher:

Abstract:

"Expectation-Maximization'' (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.

Relevance:

100.00%

Publisher:

Abstract:

The crisis that broke out in the US mortgage market in 2008 and spread throughout the entire financial system exposed the degree of interconnection that currently exists among the entities of the sector and their links with the productive sector, highlighting the need to identify and characterize the systemic risk inherent in the system, so that regulators can pursue stability both for individual institutions and for the system as a whole. This document shows, through a model that combines the informative power of networks with a spatial autoregressive (panel-type) specification, the importance of adding to the micro-prudential approach (proposed in Basel II) a variable that captures the effect of being connected to other institutions, thus providing a macro-prudential analysis (proposed in Basel III).

Relevance:

100.00%

Publisher:

Abstract:

We propose and estimate a financial distress model that explicitly accounts for the interactions, or spill-over effects, between financial institutions, through the use of a spatial contiguity matrix built from financial network data on interbank transactions. This setup of the financial distress model allows for the empirical validation of the importance of network externalities in determining financial distress, in addition to institution-specific and macroeconomic covariates. The relevance of this specification is that it simultaneously incorporates micro-prudential factors (Basel II) as well as macro-prudential and systemic factors (Basel III) as determinants of financial distress. Results indicate that network externalities are an important determinant of the financial health of financial institutions. The parameter that measures the effect of network externalities is both economically and statistically significant, and its inclusion as a risk factor reduces the importance of firm-specific variables such as the size or degree of leverage of the financial institution. In addition, we analyze the policy implications of the network factor model for capital requirements and deposit insurance pricing.
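
A rough sketch of the ingredient that carries the network externality: a row-standardized weights matrix built from a (hypothetical) interbank exposure matrix, plugged into a spatial-autoregressive distress equation y = rho*W*y + X*beta + eps and simulated in reduced form. The exposure matrix, rho and beta are assumptions for illustration, not estimates from the paper.

```python
import numpy as np

# Hypothetical interbank exposure matrix (rows: lenders, columns: borrowers).
exposure = np.array([[0., 10., 0., 5.],
                     [2.,  0., 8., 0.],
                     [0.,  4., 0., 6.],
                     [7.,  0., 3., 0.]])

# Row-standardized spatial weights: each bank's neighbours sum to one.
W = exposure / exposure.sum(axis=1, keepdims=True)

# Spatial autoregressive distress equation, solved in reduced form:
# y = (I - rho W)^{-1} (X beta + eps).
rng = np.random.default_rng(2)
rho, beta = 0.4, np.array([0.5, -1.2])           # assumed values, for illustration
X = rng.normal(size=(4, 2))                      # bank-specific covariates
eps = rng.normal(scale=0.1, size=4)
y = np.linalg.solve(np.eye(4) - rho * W, X @ beta + eps)
print(y)   # distress index, partly driven by neighbours through W
```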

Relevance:

100.00%

Publisher:

Abstract:

In many practical situations where spatial rainfall estimates are needed, rainfall occurs as a spatially intermittent phenomenon. An efficient geostatistical method for rainfall estimation in the case of intermittency has previously been published and comprises the estimation of two independent components: a binary random function for modeling the intermittency and a continuous random function that models the rainfall inside the rainy areas. The final rainfall estimates are obtained as the product of the estimates of these two random functions. However, the published approach does not contain a method for estimating uncertainties. The contribution of this paper is the presentation of the indicator maximum likelihood estimator, from which the local conditional distribution of the rainfall value at any location may be derived using an ensemble approach. From the conditional distribution, representations of uncertainty such as the estimation variance and confidence intervals can be obtained. An approximation to the variance can be calculated more simply by assuming that rainfall intensity is independent of location within the rainy area. The methodology has been validated using simulated and real rainfall data sets. The results of these case studies show good agreement between predicted uncertainties and measured errors obtained from the validation data.
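
The product construction and the ensemble-based uncertainty described above can be illustrated with a toy calculation: combine an ensemble of binary rain/no-rain indicators with an ensemble of positive intensities and summarize the product. The ensemble size, the rain probability and the gamma intensity model are assumptions for illustration, not the geostatistical simulations of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens = 1000

# Ensemble for one target location: binary occurrence times positive intensity.
p_rain = 0.35                                             # assumed local rain probability
occurrence = rng.random(n_ens) < p_rain                    # binary random function
intensity = rng.gamma(shape=2.0, scale=3.0, size=n_ens)    # rainfall inside rainy areas

rainfall = occurrence * intensity                          # product of the two components

estimate = rainfall.mean()
variance = rainfall.var(ddof=1)                            # estimation variance from the ensemble
ci_low, ci_high = np.percentile(rainfall, [5, 95])         # a simple confidence interval
print(estimate, variance, (ci_low, ci_high))
```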

Relevance:

100.00%

Publisher:

Abstract:

In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001, where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, ... contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood, which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
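
For orientation, a minimal sketch of the Zelterman-type calculation: under a Poisson model the rate is estimated from the frequencies of individuals seen exactly once and exactly twice, and the population size follows from the implied probability of zero contacts. The frequency counts below are invented, not the Bangkok or Netherlands data, and the covariate extension via logistic regression is not reproduced here.

```python
import math

# Hypothetical frequency counts: f[k] = number of individuals observed exactly k times.
f = {1: 120, 2: 45, 3: 18, 4: 6}
n = sum(f.values())                    # individuals observed at least once

lam_hat = 2 * f[2] / f[1]              # Zelterman's rate estimate from the 1- and 2-counts
p_zero = math.exp(-lam_hat)            # implied Poisson probability of zero contacts
N_hat = n / (1 - p_zero)               # estimated total population size
print(round(N_hat))
```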

Relevance:

100.00%

Publisher:

Abstract:

The contribution investigates the problem of estimating the size of a population, also known as the missing cases problem. Suppose a registration system aims to identify all cases having a certain characteristic such as a specific disease (cancer, heart disease, ...), a disease-related condition (HIV, heroin use, ...) or a specific behavior (driving a car without a license). Every case in such a registration system has a certain notification history, in that it might have been identified several times (at least once), which can be understood as a particular capture-recapture situation. Typically, cases that have never been listed on any occasion are left out, and it is this frequency one wants to estimate. In this paper, modelling concentrates on the counting distribution, i.e. the distribution of the variable that counts how often a given case has been identified by the registration system. Besides very simple models like the binomial or Poisson distribution, finite (nonparametric) mixtures of these are considered, providing rather flexible modelling tools. Estimation is done by maximum likelihood by means of the EM algorithm. A case study on heroin users in Bangkok in the year 2001 completes the contribution.
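
A small sketch of the simplest case named above: a zero-truncated Poisson fitted by maximum likelihood, with the missing zero class recovered through a Horvitz-Thompson-type correction. The counting data are invented for illustration, not the heroin-user records, and the finite-mixture/EM refinement is not shown.

```python
import numpy as np

# Hypothetical notification counts (each entry: how often a case was identified, >= 1).
counts = np.array([1] * 150 + [2] * 60 + [3] * 25 + [4] * 8 + [5] * 2)
n, xbar = len(counts), counts.mean()

# Zero-truncated Poisson MLE: the likelihood equation is lam / (1 - exp(-lam)) = xbar,
# solved here by a simple fixed-point iteration.
lam = xbar
for _ in range(100):
    lam = xbar * (1.0 - np.exp(-lam))

# Add back the estimated share of never-identified (zero-count) cases.
N_hat = n / (1.0 - np.exp(-lam))
print(lam, round(N_hat))
```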

Relevance:

100.00%

Publisher:

Abstract:

Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set into perspective. Construction of the maximum likelihood estimator of the mixing distribution is done for any number of components up to the global nonparametric maximum likelihood bound using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special-purpose software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems in using the mixture model-based estimators are highlighted.
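
As a complement to the sketches above, Chao's lower-bound estimator mentioned here needs only the singleton and doubleton frequencies. A two-line illustration with invented counts (CAMCR itself is not used here):

```python
# Chao's lower-bound estimator from singleton/doubleton counts (toy numbers).
f1, f2, n = 120, 45, 189           # seen once, seen twice, total observed
N_chao = n + f1**2 / (2 * f2)      # n + f1^2 / (2 f2)
print(round(N_chao))               # lower bound for the population size
```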

Relevance:

100.00%

Publisher:

Abstract:

In this work the $G_A^0$ distribution is assumed as the universal model for amplitude Synthetic Aperture Radar (SAR) imagery data under the multiplicative model. The observed data are therefore assumed to obey a $G_A^0(\alpha, \gamma, n)$ law, where the parameter $n$ is related to the speckle noise and $(\alpha, \gamma)$ are related to the ground truth, giving information about the background. Maps generated by estimating $(\alpha, \gamma)$ at each coordinate can thus be used as input for classification methods. Maximum likelihood estimators are derived and used to form estimated parameter maps. This estimation can be hampered by the presence of corner reflectors, man-made objects used to calibrate SAR images that produce large return values. In order to alleviate this contamination, robust (M) estimators are also derived for the universal model. Gaussian maximum likelihood classification is used to obtain maps from hard-to-deal-with simulated data, and the superiority of robust estimation is quantitatively assessed.
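
The final step mentioned above, Gaussian maximum-likelihood classification of the per-pixel (alpha, gamma) feature maps, can be sketched as follows; the class statistics and feature values are fabricated stand-ins, not estimates obtained from SAR data.

```python
import numpy as np

def gaussian_ml_classify(features, means, covs):
    """Assign each feature vector to the class with the highest Gaussian log-likelihood."""
    scores = []
    for mu, cov in zip(means, covs):
        diff = features - mu
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)   # squared Mahalanobis distance
        scores.append(-0.5 * (maha + logdet))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Two hypothetical classes in the (alpha, gamma) feature space.
means = [np.array([-3.0, 1.0]), np.array([-8.0, 4.0])]
covs = [np.eye(2) * 0.5, np.eye(2) * 1.0]

rng = np.random.default_rng(4)
pixels = np.vstack([rng.multivariate_normal(means[0], covs[0], 5),
                    rng.multivariate_normal(means[1], covs[1], 5)])
print(gaussian_ml_classify(pixels, means, covs))
```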

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel two-pass algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for block-based motion compensation. On the basis of research on previous algorithms, especially an on-the-edge motion estimation algorithm called hexagonal search (HEXBS), we propose the LHMEA and the Two-Pass Algorithm (TPA). We introduce the hashtable into video compression. In this paper we employ LHMEA for the first-pass search over all the Macroblocks (MB) in the picture. Motion Vectors (MV) are then generated from the first pass and are used as predictors for the second-pass HEXBS motion estimation, which only searches a small number of MBs. The evaluation of the algorithm considers three important metrics: time, compression rate and PSNR. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms. Experimental results show that the proposed algorithm can offer the same compression rate as the Full Search. LHMEA with TPA improves significantly on HEXBS and shows a direction for improving other fast motion estimation algorithms, for example Diamond Search.
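
A compact sketch of the hexagonal-search idea underlying HEXBS: refine a motion vector by repeatedly testing a hexagon of candidate offsets around the current best SAD match, then finish with a small pattern. The frame data, block position and block size are placeholders, and the hashtable first pass is not reproduced here.

```python
import numpy as np

def sad(cur, ref, bx, by, mvx, mvy, bs=16):
    """Sum of absolute differences between a current block and a displaced reference block."""
    a = cur[by:by + bs, bx:bx + bs].astype(int)
    b = ref[by + mvy:by + mvy + bs, bx + mvx:bx + mvx + bs].astype(int)
    return np.abs(a - b).sum()

def hex_search(cur, ref, bx, by, mv=(0, 0)):
    """Move a large hexagon of candidates until the centre wins, then refine with a small pattern."""
    large = [(0, 0), (2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]
    small = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    best = mv
    while True:
        cand = [(best[0] + dx, best[1] + dy) for dx, dy in large]
        nxt = min(cand, key=lambda m: sad(cur, ref, bx, by, *m))
        if nxt == best:
            break
        best = nxt
    cand = [(best[0] + dx, best[1] + dy) for dx, dy in small]
    return min(cand, key=lambda m: sad(cur, ref, bx, by, *m))

# Smooth synthetic frames with a known displacement (placeholder data).
yy, xx = np.mgrid[0:128, 0:128]
ref = (128 + 100 * np.sin(xx / 9.0) * np.cos(yy / 7.0)).astype(np.uint8)
cur = np.roll(ref, shift=(3, -4), axis=(0, 1))   # content moves by (mvx, mvy) = (4, -3)
print(hex_search(cur, ref, bx=48, by=48))        # expected to end up at or near (4, -3)
```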

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel two-pass algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for block-based motion compensation. On the basis of research on previous algorithms, especially an on-the-edge motion estimation algorithm called hexagonal search (HEXBS), we propose the LHMEA and the Two-Pass Algorithm (TPA). We introduced the hashtable into video compression. In this paper we employ LHMEA for the first-pass search over all the Macroblocks (MB) in the picture. Motion Vectors (MV) are then generated from the first pass and are used as predictors for the second-pass HEXBS motion estimation, which only searches a small number of MBs. The evaluation of the algorithm considers three important metrics: time, compression rate and PSNR. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms. Experimental results show that the proposed algorithm can offer the same compression rate as the Full Search. LHMEA with TPA improves significantly on HEXBS and shows a direction for improving other fast motion estimation algorithms, for example Diamond Search.

Relevance:

100.00%

Publisher:

Abstract:

Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set into perspective. Construction of the maximum likelihood estimator of the mixing distribution is done for any number of components up to the global nonparametric maximum likelihood bound using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special-purpose software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems in using the mixture model-based estimators are highlighted.

Relevance:

100.00%

Publisher:

Abstract:

A Bayesian method of estimating multivariate sample selection models is introduced and applied to the estimation of a demand system for food in the UK to account for censoring arising from infrequency of purchase. We show how it is possible to impose identifying restrictions on the sample selection equations and that, unlike a maximum likelihood framework, the imposition of adding up at both latent and observed levels is straightforward. Our results emphasise the role played by low incomes and socio-economic circumstances in leading to poor diets and also indicate that the presence of children in a household has a negative impact on dietary quality.
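
To make the censoring mechanism concrete, here is a toy data-generating sketch of a single sample-selection equation of the infrequency-of-purchase type: a latent purchase propensity decides whether the latent demand is observed or recorded as zero. The coefficients, the probit-style selection rule and the single-equation setup are illustrative assumptions, not the estimated multivariate UK demand system.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000

income = rng.normal(size=n)                        # standardized household covariate
u_sel, u_dem = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T

# Selection equation: does the household purchase during the survey period?
purchase = (0.3 + 0.8 * income + u_sel) > 0

# Outcome equation: latent demand, observed only for purchasing households.
latent_demand = 1.0 + 0.6 * income + u_dem
observed = np.where(purchase, latent_demand, 0.0)  # zeros arise from infrequency of purchase

print(purchase.mean(), observed[purchase].mean())
```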