46 results for large-scale optimization
Abstract:
What drove the transition from small-scale human societies centred on kinship and personal exchange, to large-scale societies comprising cooperation and division of labour among untold numbers of unrelated individuals? We propose that the unique human capacity to negotiate institutional rules that coordinate social actions was a key driver of this transition. By creating institutions, humans have been able to move from the default 'Hobbesian' rules of the 'game of life', determined by physical/environmental constraints, into self-created rules of social organization where cooperation can be individually advantageous even in large groups of unrelated individuals. Examples include rules of food sharing in hunter-gatherers, rules for the usage of irrigation systems in agriculturalists, property rights and systems for sharing reputation between mediaeval traders. Successful institutions create rules of interaction that are self-enforcing, providing direct benefits both to individuals that follow them, and to individuals that sanction rule breakers. Forming institutions requires shared intentionality, language and other cognitive abilities largely absent in other primates. We explain how cooperative breeding likely selected for these abilities early in the Homo lineage. This allowed anatomically modern humans to create institutions that transformed the self-reliance of our primate ancestors into the division of labour of large-scale human social organization.
Abstract:
INTRODUCTION: The aim of this study was to test the diagnostic value of cerebrospinal fluid (CSF) beta-amyloid (Aβ1-42), phosphorylated tau, and total tau (tau) to discriminate Alzheimer's disease (AD) dementia from other forms of dementia. METHODS: A total of 675 CSF samples collected at eight memory clinics were obtained from healthy controls, AD dementia, subjective memory impairment, mild cognitive impairment, vascular dementia, Lewy body dementia (LBD), fronto-temporal dementia (FTD), depression, or other neurological diseases. RESULTS: CSF Aβ1-42 showed the best diagnostic accuracy among the CSF biomarkers. At a sensitivity of 85%, the specificity to differentiate AD dementia against other diagnoses ranged from 42% (for LBD, 95% confidence interval or CI = 32-62) to 77% (for FTD, 95% CI = 62-90). DISCUSSION: CSF Aβ1-42 discriminates AD dementia from FTD, but shows significant overlap with other non-AD forms of dementia, possibly reflecting the underlying mixed pathologies.
Abstract:
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders.
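The core of the linearization can be sketched in a few lines: once the measured signal is written as a non-negative combination of precomputed response-function atoms, fitting reduces to a convex non-negative least-squares problem solvable in one call. The sketch below uses a small random toy dictionary, not the actual ActiveAx or NODDI operators:

```python
import numpy as np
from scipy.optimize import nnls

# Toy setup: 30 "measurements" and 8 dictionary atoms (hypothetical response
# functions); in AMICO-style methods the atoms are precomputed from the model.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(30, 8)))          # precomputed dictionary
x_true = np.zeros(8)
x_true[[1, 4]] = [0.7, 0.3]                   # sparse ground-truth volume fractions
y = A @ x_true + 0.001 * rng.normal(size=30)  # noisy synthetic signal

# Fitting is now a convex non-negative least-squares problem
x_hat, resid = nnls(A, y)
```

Because the problem is linear and convex, the per-voxel fit is orders of magnitude cheaper than iterative non-linear optimization, which is the source of the speedups reported above.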
Abstract:
MOTIVATION: The detection of positive selection is widely used to study gene and genome evolution, but its application remains limited by the high computational cost of existing implementations. We present a series of computational optimizations for more efficient estimation of the likelihood function on large-scale phylogenetic problems. We illustrate our approach using the branch-site model of codon evolution. RESULTS: We introduce novel optimization techniques that substantially outperform both CodeML from the PAML package and our previously optimized sequential version SlimCodeML. These techniques can also be applied to other likelihood-based phylogeny software. Our implementation scales well for large numbers of codons and/or species, and can therefore analyse substantially larger datasets than CodeML. We evaluated FastCodeML on different platforms and measured average speedups versus CodeML of up to 5.8 for the single-threaded version, up to 36.9 for the multi-threaded version on a single shared-memory node (12 CPU cores), and up to 170.9 for the distributed version on eight nodes (96 CPU cores in total). AVAILABILITY AND IMPLEMENTATION: ftp://ftp.vital-it.ch/tools/FastCodeML/. CONTACT: selectome@unil.ch or nicolas.salamin@unil.ch.
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful families of such methods is known as Support Vector algorithms. Following fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data and can therefore provide an efficient means to model the local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a potential limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
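The idea of mixing kernels at several spatial scales can be illustrated with a standard SVR on a precomputed kernel. The two bandwidths and the fixed mixture weight below are hypothetical stand-ins; the method described above learns the mixture from data rather than fixing it by hand:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(200, 1))
# Synthetic field: large-scale trend plus a short-scale anomaly
y = np.sin(X[:, 0]) + 0.2 * np.sin(8.0 * X[:, 0])

def multiscale_kernel(A, B, w=0.5):
    # Convex mixture of a broad (small gamma) and a narrow (large gamma) RBF
    # kernel; a sum of valid kernels is itself a valid kernel.
    return w * rbf_kernel(A, B, gamma=0.1) + (1.0 - w) * rbf_kernel(A, B, gamma=5.0)

model = SVR(kernel="precomputed", C=10.0, epsilon=0.01)
model.fit(multiscale_kernel(X, X), y)
pred = model.predict(multiscale_kernel(X, X))
```

The broad kernel captures the smooth regional trend while the narrow kernel absorbs local anomalies, which is the intuition behind the multi-scale SVR.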
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. A stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is then applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
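The non-parametric link between the two conductivities can be sketched with a bivariate kernel density estimate: fit a joint density to collocated measurements, then predict log-hydraulic-conductivity at a new location by taking a conditional expectation on a grid. The synthetic petrophysical relationship and all numbers below are purely illustrative, not the field data:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic collocated "logs": log hydraulic conductivity related non-linearly
# to electrical conductivity sigma, plus scatter (invented numbers)
rng = np.random.default_rng(2)
sigma = rng.uniform(1.0, 3.0, 500)
logK = -4.0 + 1.5 * np.log(sigma) + 0.1 * rng.normal(size=500)

kde = gaussian_kde(np.vstack([sigma, logK]))  # joint density p(sigma, logK)

def conditional_mean_logK(sigma_obs, grid=np.linspace(-5.0, -2.0, 200)):
    # E[logK | sigma]: weight grid values of logK by the joint density
    pts = np.vstack([np.full_like(grid, sigma_obs), grid])
    w = kde(pts)
    return np.sum(grid * w) / np.sum(w)

est = conditional_mean_logK(2.0)
```

In the sequential simulation itself one would sample from this conditional distribution rather than take its mean, but the kernel-density construction is the same.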
Abstract:
BACKGROUND: The Complete Arabidopsis Transcript MicroArray (CATMA) initiative combines the efforts of laboratories in eight European countries to deliver gene-specific sequence tags (GSTs) for the Arabidopsis research community. The CATMA initiative offers the power and flexibility to regularly update the GST collection according to evolving knowledge about the gene repertoire. These GST amplicons can easily be reamplified and shared, subsets can be picked at will to print dedicated arrays, and the GSTs can be cloned and used for other functional studies. This ongoing initiative has already produced approximately 24,000 GSTs that have been made publicly available for spotted microarray printing and RNA interference. RESULTS: GSTs from the CATMA version 2 repertoire (CATMAv2, created in 2002) were mapped onto the gene models from two independent Arabidopsis nuclear genome annotation efforts, TIGR5 and PSB-EuGène, to consolidate a list of genes that were targeted by previously designed CATMA tags. A total of 9,027 gene models were not tagged by any amplified CATMAv2 GST, and 2,533 amplified GSTs were no longer predicted to tag an updated gene model. To validate the efficacy of the GST mapping criteria and design rules, the predicted and experimentally observed hybridization characteristics associated with GST features were correlated in transcript profiling datasets obtained with the CATMAv2 microarray, confirming the reliability of this platform. To complete the CATMA repertoire, all 9,027 gene models for which no GST had yet been designed were processed with an adjusted version of the Specific Primer and Amplicon Design Software (SPADS). A total of 5,756 novel GSTs were designed and amplified by PCR from genomic DNA. Together with the pre-existing GST collection, this new addition constitutes the CATMAv3 repertoire.
It comprises 30,343 unique amplified sequences that tag 24,202 and 23,009 protein-encoding nuclear gene models in the TAIR6 and EuGène genome annotations, respectively. To cover the remaining untagged genes, we identified 543 additional GSTs using less stringent design criteria and designed 990 sequence tags matching multiple members of gene families (Gene Family Tags or GFTs). These latter 1,533 features constitute the CATMAv4 addition. CONCLUSION: To update the CATMA GST repertoire, we designed 7,289 additional sequence tags, bringing the total number of tagged TAIR6-annotated Arabidopsis nuclear protein-coding genes to 26,173. This resource is used both for the production of spotted microarrays and for the large-scale cloning of hairpin RNA silencing vectors. All information about the resulting updated CATMA repertoire is available through the CATMA database http://www.catma.org.
Abstract:
A first assessment of debris flow susceptibility at a large scale was performed along National Road N7, Argentina. Numerous catchments are prone to debris flows and likely to endanger road users. A 1:50,000 susceptibility map was created. The use of a DEM (30 m grid) associated with three complementary criteria (slope, contributing area, curvature) allowed the identification of potential source areas. The debris flow spreading was estimated using a process- and GIS-based model (Flow-R) based on basic probabilistic and energy calculations. The best-fit values for the coefficient of friction and the mass-to-drag ratio of the PCM model were found to be μ = 0.02 and M/D = 180, and the resulting propagation on one of the calibration sites was validated using the Coulomb friction model. The results are realistic and will be useful to determine which areas need to be prioritized for detailed studies.
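The PCM (Perla-Cheng-McClung) two-parameter model behind this propagation step updates the squared flow velocity segment by segment from the friction coefficient μ and the mass-to-drag ratio M/D. A minimal sketch using the calibrated values above on a hypothetical three-segment slope profile (the profile itself is invented for illustration):

```python
import math

def pcm_velocities(segments, mu=0.02, md=180.0, g=9.81):
    """PCM two-parameter runout model; segments = [(slope_deg, length_m), ...]."""
    v2 = 0.0
    out = []
    for theta_deg, L in segments:
        theta = math.radians(theta_deg)
        a = g * (math.sin(theta) - mu * math.cos(theta))  # driving minus friction
        decay = math.exp(-2.0 * L / md)
        v2 = a * md * (1.0 - decay) + v2 * decay          # PCM velocity update
        if v2 <= 0.0:                                     # flow stops on this segment
            out.append(0.0)
            break
        out.append(math.sqrt(v2))
    return out

# Hypothetical profile: steep source area, transition slope, gentle fan
v = pcm_velocities([(35.0, 300.0), (20.0, 200.0), (5.0, 400.0)])
```

The velocity decays downslope as friction overtakes the driving force, which is how the model delimits the runout extent.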
Abstract:
This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
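The key scalability ingredient of the second method — training a clustering model by stochastic gradient descent, one sample at a time — can be illustrated with online k-means, a minimal stand-in for the neural-network model used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two well-separated synthetic clusters (invented data)
X = np.vstack([rng.normal(0.0, 0.3, (500, 2)), rng.normal(3.0, 0.3, (500, 2))])
rng.shuffle(X)

k, lr = 2, 0.05
centers = X[rng.choice(len(X), k, replace=False)].copy()
for epoch in range(5):
    for x in X:                                   # one sample at a time: O(1) memory
        j = np.argmin(((centers - x) ** 2).sum(axis=1))
        centers[j] += lr * (x - centers[j])       # SGD step on the quantization loss

labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
```

Because each update touches a single sample, the procedure streams through arbitrarily large datasets and assigns new samples with a single nearest-center lookup, avoiding the out-of-sample problem mentioned above.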
Abstract:
We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
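The mixture-of-multivariate-normals core of such a tool can be sketched with a standard Gaussian mixture fit on synthetic two-channel "voxel" intensities; the bias-field correction and MRF regularization steps are omitted, and the three class means are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Synthetic two-channel intensities for three "tissue" classes
means = np.array([[1.0, 1.0], [4.0, 2.0], [2.0, 5.0]])
voxels = np.vstack([m + 0.3 * rng.normal(size=(400, 2)) for m in means])

# Full-covariance multivariate normal mixture, fit by EM
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(voxels)
```

Using both channels jointly separates classes that overlap in any single channel, which is the intuition behind the improved replicability of multivariate over monospectral segmentation.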
Abstract:
Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, which is in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to have much more control on the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods.
Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
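The two distinctive ingredients — perturbations drawn from a prior distribution rather than value swapping, and an objective that constrains the spatial covariance structure only at small lags — can be sketched in a toy 1D setting (this is a generic illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(6)

def variogram(z, max_lag=3):
    # Experimental semivariogram at the first few (small) lags only
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in range(1, max_lag + 1)])

# Target small-lag behaviour from a smooth reference field (stands in for the
# geostatistical model the realizations must honour at short range)
reference = np.convolve(rng.normal(size=220), np.ones(5) / 5.0, mode="valid")
target = variogram(reference)

z = rng.normal(scale=reference.std(), size=200)     # initial realization
obj = np.sum((variogram(z) - target) ** 2)
obj0, T = obj, 1e-3
for _ in range(20000):
    i = rng.integers(len(z))
    old = z[i]
    z[i] = rng.normal(scale=reference.std())        # draw from the marginal prior
    new = np.sum((variogram(z) - target) ** 2)
    if new > obj and rng.random() > np.exp((obj - new) / T):
        z[i] = old                                  # Metropolis rejection
    else:
        obj = new
    T *= 0.9995                                     # cooling schedule
```

Because the objective involves only a handful of small lags, each proposal is cheap to evaluate, which reflects the computational advantage claimed above.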
Abstract:
To date, published studies of alluvial bar architecture in large rivers have been restricted mostly to case studies of individual bars and single locations. Relatively little is known about how the depositional processes and sedimentary architecture of kilometre-scale bars vary within a multi-kilometre reach or over several hundreds of kilometres downstream. This study presents Ground Penetrating Radar and core data from 11 kilometre-scale bars from the Rio Parana, Argentina. The investigated bars are located between 30 km upstream and 540 km downstream of the Rio Parana - Rio Paraguay confluence, where a significant volume of fine-grained suspended sediment is introduced into the network. Bar-scale cross-stratified sets, with lengths and widths up to 600 m and thicknesses up to 12 m, enable the distinction of large river deposits from stacked deposits of smaller rivers, but are only present in half the surface area of the bars. Up to 90% of bar-scale sets are found on top of finer-grained ripple-laminated bar-trough deposits. Bar-scale sets make up as much as 58% of the volume of the deposits in small, incipient mid-channel bars, but this proportion decreases significantly with increasing age and size of the bars. Contrary to what might be expected, a significant proportion of the sedimentary structures found in the Rio Parana is similar in scale to those found in much smaller rivers. In other words, large river deposits are not always characterized by big structures that allow a simple interpretation of river scale. However, the large scale of the depositional units in big rivers causes small-scale structures, such as ripple sets, to be grouped into thicker cosets, which indicate river scale even when no obvious large-scale sets are present. The results also show that the composition of bars differs between the studied reaches upstream and downstream of the confluence with the Rio Paraguay.
Relative to other controls on downstream fining, the tributary input of fine-grained suspended material from the Rio Paraguay causes a marked change in the composition of the bar deposits. Compared to the upstream reaches, the sedimentary architecture of the downstream reaches in the top ca 5 m of mid-channel bars shows: (i) an increase in the abundance and thickness (up to metre-scale) of laterally extensive (hundreds of metres) fine-grained layers; (ii) an increase in the percentage of deposits composed of ripple sets (to >40% in the upper bar deposits); and (iii) an increase in bar-trough deposits and a corresponding decrease in bar-scale cross-strata (<10%). The thalweg deposits of the Rio Parana are composed of dune sets, even directly downstream from the Rio Paraguay, where the upper channel deposits are dominantly fine-grained. Thus, the change in sedimentary facies due to a tributary point-source of fine-grained sediment is primarily expressed in the composition of the upper bar deposits.
Abstract:
BACKGROUND: Biological rhythmicity has been extensively studied in animals for many decades. Although temporal patterns of physical activity have been identified in humans, no large-scale, multi-national study has been published, and no comparison has been attempted of the ubiquity of activity rhythms at different time scales (such as daily, weekly, monthly, and annual). METHODS: Using individually worn actigraphy devices, the physical activity of 2,328 individuals from five different countries (adults of African descent from Ghana, South Africa, Jamaica, Seychelles, and the United States) was measured for seven consecutive days at different times of the year. RESULTS: Analysis for rhythmic patterns identified daily rhythmicity of physical activity in all five of the represented nationalities. Weekly rhythmicity was found in some, but not all, of the nationalities. No significant evidence of lunar or seasonal rhythmicity was found in any of the groups. CONCLUSIONS: These findings extend previous small-scale observations of daily rhythmicity to a large cohort of individuals from around the world. They also confirm the existence of modest weekly rhythmicity, but not lunar or seasonal rhythmicity, in human activity. These differences in rhythm strength have implications for the management of the health hazards of rhythm misalignment. KEY MESSAGES: Analysis of the pattern of physical activity of 2,328 individuals from five countries revealed strong daily rhythmicity in all five countries, moderate weekly rhythmicity in some countries, and no lunar or seasonal rhythmicity in any of the countries.
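Rhythmicity at a given time scale is commonly detected with a periodogram. A minimal sketch on synthetic hourly activity counts containing an exact 24-hour component (the data and amplitudes are invented, not the study's actigraphy records):

```python
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(7 * 24)                     # one week of hourly epochs
# Synthetic activity: baseline + daily (24 h) rhythm + noise
activity = 50 + 30 * np.sin(2 * np.pi * hours / 24.0) + 5 * rng.normal(size=hours.size)

# FFT periodogram of the mean-removed series
power = np.abs(np.fft.rfft(activity - activity.mean())) ** 2
freqs = np.fft.rfftfreq(hours.size, d=1.0)    # cycles per hour
period = 1.0 / freqs[np.argmax(power)]        # dominant period, in hours
```

A weekly or seasonal rhythm would be tested the same way on longer records, by inspecting the spectral peak at the corresponding frequency.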
Abstract:
On a geological time scale, the conditions on earth are very variable and biological patterns (for example, the distributions of species) are very dynamic. Understanding the large-scale patterns of variation observed today thus requires a deep understanding of the historical factors that drove their evolution. In this thesis, we reevaluated the evolution and maintenance of a continental color cline observed in the European barn owl (Tyto alba) using population genetic tools. The colour cline spans from south-west Europe, where most individuals have pure white underparts, to north and east Europe, where most individuals have rufous-brown underparts. Our results globally showed that the old scenario, stipulating that the color cline evolved by secondary contact of two color morphs (white and rufous) that evolved in allopatry during the last ice age, has to be revised. We collected samples of about 700 barn owls from the Western Palearctic to establish the first population genetic data set for this species. Individuals were genotyped at 22 microsatellite markers, at one mitochondrial gene, and at a candidate color gene. The color of each individual was assessed and its sex determined by molecular methods. We first showed that the genetic variation in Western Europe is very limited compared to the heritable color variation. We found no evidence of different glacial lineages, and showed that selection must be involved in the maintenance of the color cline (chapter 1). Using computer simulations, we demonstrated that the post-glacial colonization of Europe occurred from the Iberian Peninsula and that the color cline could not have evolved by neutral demographic processes during this colonization (chapter 2).
Finally, we reevaluated the whole history of the establishment of the Western Palearctic variation of the barn owl (chapter 3): this study showed that all Western European barn owls descend from white-phenotype barn owls from the Middle East that colonized the Iberian Peninsula via North Africa. Following the end of the last ice age (20,000 years ago), these white barn owls colonized Western Europe, and under selection a novel rufous phenotype evolved (during or after the colonization). An important part of the color variation can be explained by a single mutation in the melanocortin-1-receptor (MC1R) gene that appeared during or after the colonization. The colonization of Europe extended to Greece, where the rufous birds encountered white ones (which reached Greece from the Middle East over the Bosporus) in a secondary contact zone. Our analyses show that white and rufous barn owls in Greece interbreed only to a limited extent. This suggests that barn owls are on the verge of becoming two species in Greece and demonstrates that European barn owls represent an incipient ring species around the Mediterranean. The revisited history of the establishment of the European barn owl color cline makes this model system remarkable in several respects. It is a very clear example of strong local adaptation achieved despite high gene flow (strong color and MC1R differentiation despite almost no neutral genetic differentiation). It also offers a wonderful model system to study the interactions between colonization processes and selection processes, which have so far been remarkably understudied despite their potentially ubiquitous importance. Finally, it represents a very interesting case in the speciation continuum and calls for further study of the amount of gene flow that occurs between the color morphs in Greece.
Abstract:
Waddlia chondrophila, an obligate intracellular bacterium of the Chlamydiales order, is considered an agent of bovine abortion and a likely cause of miscarriage in humans. A possible role in respiratory disease was suggested after the detection of its DNA in clinical samples taken from patients suffering from pneumonia or bronchiolitis. To better define the role of Waddlia in both miscarriage and pneumonia, a tool allowing large-scale serological investigation of Waddlia seropositivity is needed. Therefore, enriched outer membrane proteins of W. chondrophila were used as antigens to develop a specific ELISA. After thorough analytical optimization, the ELISA was validated by comparison with micro-immunofluorescence, showing a sensitivity above 85% with 100% specificity. The ELISA was subsequently applied to human sera to specify the role of W. chondrophila in pneumonia. Overall, 3.6% of children showed antibody reactivity against W. chondrophila, but no significant difference was observed between children with and without pneumonia. Proteomic analyses were then performed using mass spectrometry, highlighting members of the outer membrane protein family as the dominant proteins. The major putative immunogenic proteins of Waddlia were identified by immunoblot using positive and negative human sera. The new ELISA represents an efficient tool for high-throughput applications. Although no association between Waddlia seropositivity and pneumonia was observed, this ELISA could be used to specify the role of W. chondrophila in miscarriage and other diseases.