86 results for Classical measurement error model


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: So far, none of the existing work on Murray's law deals with the non-Newtonian behaviour of blood flow, although the non-Newtonian approach to blood flow modelling is more accurate. MODELING: In the present paper, Murray's law, which applies to an arterial bifurcation, is generalized to a non-Newtonian (power-law) blood flow model. When the vessel size reaches the capillary limit, blood must be modelled with a non-Newtonian constitutive equation. Two different constraints are assumed in addition to the pumping power: a volume constraint or a surface constraint (related to the internal surface of the vessel). For the sake of generality, the relationships are given for an arbitrary number of daughter vessels. It is shown that for a cost function including the volume constraint, classical Murray's law remains valid (i.e. ΣR^c = const holds with c = 3, independently of n, the dimensionless index in the viscosity equation; R being the radius of the vessel). On the contrary, for a cost function including the surface constraint, different values of c are obtained depending on the value of n. RESULTS: We find that c varies for blood from 2.42 to 3 depending on the constraint and the fluid properties. For the Newtonian model, the surface constraint leads to c = 2.5. The cost function (based on the surface constraint) can be related to entropy generation by dividing it by the temperature. CONCLUSION: It is demonstrated that the entropy generated in all the daughter vessels is greater than the entropy generated in the parent vessel. Furthermore, the difference in entropy generation between the parent and daughter vessels is smaller for a non-Newtonian fluid than for a Newtonian one.
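The bifurcation rule above (ΣR^c = const across the daughter vessels, with c = 3 in the classical volume-constrained case) can be sketched numerically. The function names and the equal-daughter simplification below are ours, not the paper's:

```python
def daughter_radius(r_parent, n_daughters, c=3.0):
    """Equal-size daughter radius from the generalized Murray relation
    sum(R_i**c) = R_parent**c (c = 3 recovers classical Murray's law)."""
    return r_parent / n_daughters ** (1.0 / c)

def murray_residual(r_parent, r_daughters, c=3.0):
    """How far a given bifurcation deviates from sum(R_i**c) = R_parent**c."""
    return sum(r ** c for r in r_daughters) - r_parent ** c

# Classical Murray bifurcation (c = 3) with two equal daughters
r_d = daughter_radius(1.0, 2)            # parent radius / 2**(1/3)
print(murray_residual(1.0, [r_d, r_d]))  # ~0 by construction
```

With c = 3 and two equal daughters, each daughter radius is the parent radius divided by 2^(1/3) ≈ 0.794; lowering c toward the surface-constrained values quoted in the abstract shrinks the daughters.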


The Belle experiment is located at the KEK research centre (Japan) and is primarily devoted to the study of CP violation in the B meson sector. Belle sits on the KEKB collider, one of the two currently running "B-meson factories", which produce B–anti-B pairs. KEKB has created more than 150 million pairs in total, a world record for this kind of collider. This large sample allows very precise measurements in the physics of beauty mesons, and the present analysis falls within that framework. One of the most remarkable phenomena in high-energy physics is the ability of the weak interaction to couple a neutral meson to its anti-meson. In this work, we study the coupling of the neutral B to the neutral anti-B meson, which induces an oscillation whose frequency Δm_d can be measured accurately. Besides the interest of this phenomenon in itself, the measurement plays an important role in the quest for the origin of CP violation, which the Standard Model of electroweak interactions does not include in a fully satisfactory way. The search for as-yet unexplained physical phenomena is therefore the main motivation of the Belle collaboration. Many measurements of Δm_d have been performed before; the present work, however, reaches a precision never attained before. This is the result of the excellent performance of KEKB and of an original approach that considerably reduces the background contamination of the sample. This approach had already been used successfully by other collaborations, under conditions slightly different from those of Belle. The method consists in partially reconstructing one of the B mesons in the channel B → D*(→ D0 π) ℓ ν, using only the information on the lepton ℓ and the pion π. The information on the other B meson of the initial B–anti-B pair is extracted from a single high-energy lepton. The available sample therefore does not suffer the large reduction entailed by full reconstruction, nor the high background from charged B mesons (produced by KEKB in quantity equal to the neutral B) that affects inclusive analyses. We finally obtain Δm_d = 0.513 ± 0.006 ± 0.008 ps^-1, where the first error is statistical and the second systematic.

What is matter made of, and how does it hold together? These are the questions that research in high-energy physics tries to answer, at two constantly interacting levels. On one hand, theoretical models are developed to understand and describe the observations; on the other, those observations are made through high-energy collisions of elementary particles. In this way the existence of four fundamental forces and 24 elementary constituents, classified into "quarks" and "leptons", was established: one of the finest successes of the model in use today, the "Standard Model". There is, however, one fundamental observation that the Standard Model struggles to explain: the near-complete disappearance of antimatter (the "negative" of matter). At the fundamental level this must correspond to an asymmetry between particles (the constituents of matter) and antiparticles (the constituents of antimatter), called CP asymmetry (or CP violation). Although included in the Standard Model, this asymmetry appears to be only partially accounted for, and its origin is unknown. Intense research is therefore under way to shed light on it, and the Belle experiment in Japan is one of its pioneers: Belle studies the physics of a family of particles called "B mesons", which are known to be closely linked to CP asymmetry. This thesis is part of that research. We studied a remarkable property of the neutral B meson: its oscillation with its anti-meson, that is, the ability of this particle to turn into its associated antiparticle. This oscillation is clearly connected to CP asymmetry, and we determined its frequency with an as-yet unequalled precision. The method consists in characterising a pair of B mesons through decays that each contain a lepton; greater precision is obtained by also looking for a pion coming from the decay of one of the mesons. Besides the intrinsic interest of the oscillation phenomenon, this measurement refines the Standard Model, directly or indirectly, and may eventually help elucidate the mystery of the matter–antimatter asymmetry.
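The oscillation being measured can be written down in closed form. The sketch below evaluates the standard B0 mixing probabilities for the quoted Δm_d = 0.513 ps^-1; the B0 lifetime is a world-average figure we assume for illustration, not a number from this work:

```python
import math

DM_D = 0.513   # ps^-1, oscillation frequency measured in the abstract
TAU_B0 = 1.52  # ps, B0 lifetime (assumed world-average value)

def p_unmixed(t, dm=DM_D, tau=TAU_B0):
    """Probability density that a B0 produced at t=0 decays as a B0 at time t."""
    return math.exp(-t / tau) * (1 + math.cos(dm * t)) / (2 * tau)

def p_mixed(t, dm=DM_D, tau=TAU_B0):
    """Probability density that it decays as an anti-B0 (i.e. has mixed)."""
    return math.exp(-t / tau) * (1 - math.cos(dm * t)) / (2 * tau)

def chi_d(dm=DM_D, tau=TAU_B0):
    """Time-integrated mixing probability chi_d = x^2 / (2 (1 + x^2)), x = dm*tau."""
    x = dm * tau
    return x * x / (2 * (1 + x * x))

print(round(chi_d(), 3))  # fraction of B0 that decay as anti-B0
```

A fit of the decay-time distribution of mixed versus unmixed events to these shapes is, schematically, how Δm_d is extracted.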


PURPOSE: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning of retinoblastoma in infants, where it serves as a source of information complementary to fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy in MRI based on 3D active shape models (ASM); we validate the results and present a proof of concept for automatically segmenting pathological eyes. METHODS AND MATERIALS: Manual and automatic segmentations were performed on 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM comprises the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method using leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC), and the mean distance error. RESULTS: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. CONCLUSION: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatic segmentation of eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
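The DSC used above to score the overlap between manual and automatic segmentations has a compact definition. A minimal sketch, with masks represented as sets of voxel coordinates (a representation choice of ours, not the authors'):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) for two binary
    masks given as sets of voxel coordinates."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 2-D example: four-voxel automatic mask vs four-voxel manual mask
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_coefficient(auto, manual))  # 2*3 / (4+4) = 0.75
```

A DSC of 1.0 means perfect overlap; the ~0.95 values reported for sclera/cornea and vitreous humor indicate near-perfect agreement with the manual reference.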


Electrical impedance tomography (EIT) is a non-invasive imaging technique that can measure cardiac-related intra-thoracic impedance changes. EIT-based cardiac output estimation relies on the assumption that the amplitude of the impedance change in the ventricular region is representative of stroke volume (SV). However, other factors such as heart motion can significantly affect this ventricular impedance change. In the present case study, a magnetic resonance imaging-based dynamic bio-impedance model fitting the morphology of a single male subject was built. Simulations were performed to evaluate the contribution of heart motion and its influence on EIT-based SV estimation. Myocardial deformation was found to be the main contributor to the ventricular impedance change (56%). However, motion-induced impedance changes showed a strong correlation (r = 0.978) with left ventricular volume. We explained this by the quasi-incompressibility of blood and myocardium. As a result, EIT achieved excellent accuracy in estimating a wide range of simulated SV values (error distribution of 0.57 ± 2.19 ml (1.02 ± 2.62%) and correlation of r = 0.996 after a two-point calibration was applied to convert impedance values to millilitres). As the model was based on one single subject, the strong correlation found between motion-induced changes and ventricular volume remains to be verified in larger datasets.
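The final conversion of impedance amplitudes to millilitres is described above as a two-point calibration. A minimal sketch, with purely hypothetical reference points (the real calibration pairs would come from the simulation data):

```python
def two_point_calibration(z1, z2, v1, v2):
    """Return a linear map impedance -> volume fixed by two reference
    (impedance, volume) pairs, as in a two-point calibration."""
    slope = (v2 - v1) / (z2 - z1)
    def to_ml(z):
        return v1 + slope * (z - z1)
    return to_ml

# Hypothetical references: impedance amplitudes (a.u.) vs stroke volume (ml)
to_ml = two_point_calibration(0.10, 0.30, 40.0, 80.0)
print(to_ml(0.20))  # an impedance midway between the references -> ~60 ml
```

Once the two reference pairs are fixed, every subsequent EIT amplitude maps directly to a stroke-volume estimate in millilitres.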


Many species are able to learn to associate behaviours with rewards as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals to use information about payoffs associated with nontried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoff. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask what is the evolutionarily stable learning rule under pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. We analyse through stochastic approximation theory and simulations the learning dynamics on the behavioural timescale, and derive conditions where trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also obtain where trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
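Trial-and-error learning of the kind the model contrasts with hypothetical reinforcement can be sketched as a simple reinforcement update in a two-action stochastic game: only the realized payoff of the tried action is reinforced, never the payoff of the non-tried action. The payoff distributions and parameters below are illustrative, not the paper's:

```python
import random

def trial_and_error_step(values, payoffs, alpha=0.1, eps=0.1):
    """One round of trial-and-error learning: choose epsilon-greedily,
    then update only the chosen action toward its realized payoff."""
    if random.random() < eps:
        action = random.randrange(2)            # explore
    else:
        action = max(range(2), key=lambda a: values[a])  # exploit
    reward = payoffs[action]()
    values[action] += alpha * (reward - values[action])
    return action, reward

# Hypothetical stochastic payoffs: action 1 is better on average
payoffs = [lambda: random.gauss(0.3, 0.1), lambda: random.gauss(0.7, 0.1)]
values = [0.0, 0.0]
random.seed(0)
for _ in range(2000):
    trial_and_error_step(values, payoffs)
print(values[1] > values[0])  # the learner comes to prefer action 1
```

A hypothetical-reinforcement learner would additionally update the value of the action it did not play, using its (counterfactual) payoff; the evolutionary question in the abstract is when that extra cognition actually pays.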


Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
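The MCMC machinery behind such an inversion is, at its core, a Metropolis accept/reject loop. This one-parameter toy, with a likelihood and a prior-like pull chosen purely for illustration, shows the mechanics only, not the 2-D pixel-based inversion itself:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5):
    """Random-walk Metropolis sampler over a single parameter: propose a
    Gaussian perturbation, accept with probability min(1, post_new/post)."""
    chain, x = [], x0
    lp = log_post(x)
    for _ in range(n_steps):
        x_new = x + random.gauss(0.0, step)
        lp_new = log_post(x_new)
        if math.log(random.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Toy posterior: Gaussian "data" term around an observed value of 1.0, plus
# a broad regularizing pull toward a reference value of 2.0
def log_post(x):
    return -0.5 * ((x - 1.0) / 0.2) ** 2 - 0.5 * ((x - 2.0) / 1.0) ** 2

random.seed(1)
chain = metropolis(log_post, x0=0.0, n_steps=20000)
mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
print(round(mean, 2))
```

Even in this toy, the regularizing term pulls the posterior mean slightly away from the data optimum, which is the one-parameter analogue of the structure-constraint effect discussed in the abstract.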


Background. We elaborated a model that predicts the centiles of the 25(OH)D distribution, taking seasonal variation into account. Methods. Data from two Swiss population-based studies were used to generate (CoLaus) and validate (Bus Santé) the model. Serum 25(OH)D was measured by ultra-high-pressure LC-MS/MS and immunoassay. Linear regression models on square-root-transformed 25(OH)D values were used to predict centiles of the 25(OH)D distribution. Distribution functions of the observations from the replication set, predicted with the model, were inspected to assess replication. Results. Overall, 4,912 and 2,537 Caucasians were included in the original and replication sets, respectively. Mean (SD) 25(OH)D, age, BMI, and % of men were 47.5 (22.1) nmol/L, 49.8 (8.5) years, 25.6 (4.1) kg/m², and 49.3% in the original study. The best model included gender, BMI, and sine-cosine functions of the measurement day. Sex- and BMI-specific 25(OH)D centile curves as a function of measurement date were generated. The model estimates any centile of the 25(OH)D distribution for given values of sex, BMI, and date, as well as the quantile corresponding to a given 25(OH)D measurement. Conclusions. We generated and validated centile curves of 25(OH)D in the general adult Caucasian population. These curves can help rank an individual's vitamin D status independently of when 25(OH)D is measured.
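The model structure described (linear regression on square-root-transformed 25(OH)D with sine-cosine terms of the measurement day) can be sketched as follows; the coefficients are illustrative placeholders, not the published fit:

```python
import math

def seasonal_design(day_of_year, male, bmi):
    """Design row: intercept, sex, BMI, and annual sine/cosine terms of
    the measurement day (the predictors named in the abstract)."""
    w = 2 * math.pi * day_of_year / 365.25
    return [1.0, float(male), bmi, math.sin(w), math.cos(w)]

def predict_sqrt_25ohd(row, beta):
    """Linear predictor on the square-root 25(OH)D scale."""
    return sum(x * b for x, b in zip(row, beta))

# Illustrative coefficients on the sqrt(nmol/L) scale -- placeholders only
beta = [7.2, 0.1, -0.05, -0.4, -0.6]

summer = predict_sqrt_25ohd(seasonal_design(180, male=1, bmi=25.0), beta) ** 2
winter = predict_sqrt_25ohd(seasonal_design(15, male=1, bmi=25.0), beta) ** 2
print(summer > winter)  # with these betas, mid-year levels come out higher
```

Centiles on the original nmol/L scale are obtained by adding the appropriate residual quantile on the square-root scale before back-transforming, which is what makes the date-specific centile curves possible.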


The short version of the Oxford-Liverpool Inventory of Feelings and Experiences (sO-LIFE) is a widely used measure for assessing schizotypy. There is limited information, however, on how sO-LIFE scores compare across different countries. The main goal of the present study is to test the measurement invariance of the sO-LIFE scores in a large sample of non-clinical adolescents and young adults from four European countries (UK, Switzerland, Italy, and Spain). The scores were obtained from validated versions of the sO-LIFE in the respective languages. The sample comprised 4190 participants (M = 20.87 years; SD = 3.71 years). The study of the internal structure, using confirmatory factor analysis, revealed that both the three-factor (i.e., positive schizotypy, cognitive disorganisation, and introvertive anhedonia) and four-factor (i.e., positive schizotypy, cognitive disorganisation, introvertive anhedonia, and impulsive nonconformity) models fitted the data moderately well. Multi-group confirmatory factor analysis showed that the three-factor model had partial strong measurement invariance across countries. Eight items were non-invariant across samples. Statistically significant differences in the mean scores of the sO-LIFE were found by country. Reliability scores, estimated with ordinal alpha, ranged from 0.75 to 0.87. Within the Item Response Theory framework, the sO-LIFE provides more measurement information at the medium and high end of the latent trait. The current results provide further evidence in support of the psychometric properties of the sO-LIFE, provide new information about the cross-cultural equivalence of schizotypy, and support the use of this measure to screen for psychotic-like features and liability to psychosis in general population samples from different European countries.
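The reliability figures above are ordinal alphas, which operate on polychoric correlations of the item responses; the plain Cronbach's alpha below is shown only to illustrate the underlying reliability formula, on hypothetical item scores:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns: alpha =
    k/(k-1) * (1 - sum of item variances / variance of total scores).
    (The study uses ordinal alpha; this is the classical analogue.)"""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var = sum(var(col) for col in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Three hypothetical dichotomous sO-LIFE-style items, five respondents
items = [[1, 0, 1, 1, 0], [1, 0, 1, 0, 0], [1, 1, 1, 1, 0]]
print(round(cronbach_alpha(items), 2))  # 0.79
```

For dichotomous or ordinal items, ordinal alpha is generally higher than this classical estimate because the polychoric correlations correct for the coarse response scale.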


For the past 10 years, mini-host models, and in particular the greater wax moth Galleria mellonella, have tended to become a surrogate for murine models of fungal infection, mainly because of cost, ethical constraints and ease of use. Methods to better assess fungal pathogenesis in G. mellonella therefore need to be developed. In this study, we implemented the detection of Candida albicans cells expressing the Gaussia princeps luciferase in their cell wall in infected larvae of G. mellonella. We demonstrated that detection and quantification of luminescence in the pulp of infected larvae is a reliable method for drug efficacy and C. albicans virulence assays, as compared to the fungal burden assay. Since the bioluminescent signal is linear with respect to CFU counts (correlation of R² = 0.62) and the method is twice as fast and less labour-intensive than classical fungal burden assays, it could be applied to large-scale studies. We next visualized and followed C. albicans infection in living G. mellonella larvae using a non-toxic, water-soluble coelenterazine formulation and a CCD camera of the kind commonly used for chemiluminescence detection. This work allowed us to follow, for the first time, the course of C. albicans infection in G. mellonella over 4 days.
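The linearity claim (R² = 0.62 between luminescence and CFU counts) is an ordinary coefficient of determination. A sketch with hypothetical paired measurements (the variable names and data are ours):

```python
def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical paired readings: luminescence (RLU) vs log10 CFU per larva
rlu = [1.2, 2.1, 2.9, 4.2, 4.8, 6.1]
log_cfu = [3.1, 3.4, 4.4, 4.1, 5.2, 5.5]
print(round(r_squared(rlu, log_cfu), 2))
```

An R² of 0.62, as reported, means the luminescence signal explains about 62% of the variance in the fungal burden measurements, enough for rapid relative comparisons.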


Computed tomography (CT) is an imaging technique in which interest has grown steadily since its introduction in the early 1970s. Today it has become an indispensable modality, thanks among other things to its ability to produce high-quality diagnostic images. However, even though CT brings an undisputed benefit to patient care, the dramatic increase in the number of CT examinations performed has raised concerns about the potentially harmful effects of ionising radiation on the population. Among these adverse effects, the induction of cancers associated with exposure to diagnostic X-ray procedures remains one of the major risks. To ensure that the benefit-risk ratio remains in the patient's favour, it is therefore necessary to make sure that the delivered dose allows the correct diagnosis while avoiding images of unnecessarily high quality. This optimisation process, already an important concern for adult patients, must become a priority when children or adolescents are examined, in particular in follow-up studies requiring several examinations over the patient's lifetime. Children and young adults are indeed much more sensitive to radiation, because their metabolism is faster than that of adults, and harmful consequences are more likely to occur because of their longer life expectancy. The introduction of iterative reconstruction algorithms, designed to substantially reduce patient exposure, is certainly one of the major advances in CT, but it comes with difficulties in assessing the quality of the images these algorithms produce. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question.

The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure a pertinent choice of image quality criteria, this work was carried out in close collaboration with radiologists. The work began by characterising image quality in musculoskeletal examinations, focusing in particular on the behaviour of image noise and spatial resolution when iterative reconstruction is used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also addressed the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal examinations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on mathematical model observers; our experimental parameters determined which type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of the human visual system elements these models incorporate.

This work confirmed that model observers make it possible to assess image quality using a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstruction, model-based methods offer the greatest potential, since the images they produce can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of the medical physicist in CT imaging: standard metrics remain quite important for assessing the compliance of a unit with legal requirements, but the use of model observers is the way to go when optimising imaging protocols.
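A task-based figure of merit of the kind model observers deliver can be as simple as the detectability index of a non-prewhitening observer for a known signal in white noise. This toy (a deliberate simplification of ours, far from the anthropomorphic models used in the thesis) makes the noise-detectability trade-off concrete:

```python
def npw_dprime(mean_present, mean_absent, noise_std):
    """Detectability index d' of a non-prewhitening model observer for a
    known signal in white noise: Euclidean norm of the mean signal
    difference divided by the noise standard deviation."""
    diff_sq = sum((a - b) ** 2 for a, b in zip(mean_present, mean_absent))
    return diff_sq ** 0.5 / noise_std

# Flat low-contrast disc on a 5x5 background (pixel means, flattened)
present = [10.0] * 16 + [11.0] * 9  # background + 9 disc pixels of contrast 1
absent = [10.0] * 25
print(npw_dprime(present, absent, noise_std=2.0))  # -> 1.5
```

Halving the noise standard deviation doubles d', which is exactly why a dose reduction (more noise) degrades low-contrast detectability unless the reconstruction compensates.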


BACKGROUND: Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries. METHODS: We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m² [underweight], 18·5 kg/m² to <20 kg/m², 20 kg/m² to <25 kg/m², 25 kg/m² to <30 kg/m², 30 kg/m² to <35 kg/m², 35 kg/m² to <40 kg/m², ≥40 kg/m² [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue. FINDINGS: We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m² (95% credible interval 21·3-22·1) in 1975 to 24·2 kg/m² (24·0-24·4) in 2014 in men, and from 22·1 kg/m² (21·7-22·5) in 1975 to 24·4 kg/m² (24·2-24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m² in central Africa and south Asia to 29·2 kg/m² (28·6-29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m² (21·4-22·3) in south Asia to 32·2 kg/m² (31·5-32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5-17·4) to 8·8% (7·4-10·3) in men and from 14·6% (11·6-17·9) to 9·7% (8·3-11·1) in women.
South Asia had the highest prevalence of underweight in 2014, 23·4% (17·8-29·2) in men and 24·0% (18·9-29·3) in women. Age-standardised prevalence of obesity increased from 3·2% (2·4-4·1) in 1975 to 10·8% (9·7-12·0) in 2014 in men, and from 6·4% (5·1-7·8) to 14·9% (13·6-16·1) in women. 2·3% (2·0-2·7) of the world's men and 5·0% (4·4-5·6) of women were severely obese (ie, have BMI ≥35 kg/m²). Globally, prevalence of morbid obesity was 0·64% (0·46-0·86) in men and 1·6% (1·3-1·9) in women. INTERPRETATION: If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025, global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia. FUNDING: Wellcome Trust, Grand Challenges Canada.
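The seven BMI bands used in the analysis can be encoded directly; a small sketch, with band labels abbreviated from the abstract:

```python
def bmi_category(bmi):
    """Classify a BMI value (kg/m^2) into the seven bands of the abstract."""
    bands = [(18.5, "<18.5 (underweight)"), (20.0, "18.5 to <20"),
             (25.0, "20 to <25"), (30.0, "25 to <30"),
             (35.0, "30 to <35"), (40.0, "35 to <40")]
    for upper, label in bands:
        if bmi < upper:
            return label
    return ">=40 (morbid obesity)"

print(bmi_category(24.2))  # 2014 global male mean BMI -> "20 to <25"
```

Band membership is decided by the lowest upper bound a BMI falls under, so the boundary values (e.g. exactly 35) land in the higher band, matching the half-open intervals of the abstract.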