Abstract:
Computer networks produce tremendous amounts of event-based data that can be collected and managed to support a growing number of new classes of pervasive applications, such as network monitoring and crisis management. Although the problem of distributed event-based management has been addressed in non-pervasive settings such as the Internet, pervasive networks have their own characteristics that render those results inapplicable. Many of these applications are built on time-series data in the form of time-ordered series of events. Such applications must also handle large volumes of unexpected events, often modified on the fly and containing conflicting information, while coping with rapidly changing contexts and producing results with low latency. Correlating events across contextual dimensions holds the key to expanding the capabilities and improving the performance of these applications. This dissertation addresses that critical challenge by establishing an effective scheme for complex-event semantic correlation. The scheme examines epistemic uncertainty in computer networks by fusing event synchronization concepts with belief theory. Because event detection is distributed, time delays are taken into account: events are no longer instantaneous but instead have a duration associated with them. Existing algorithms for synchronizing time are split into two classes, one of which is argued to converge faster and hence to be better suited for pervasive network management. Beyond the temporal dimension, the scheme accounts for imprecision and uncertainty in event detection. A belief value is therefore associated with the semantics and the detection of composite events; this belief value is generated by a consensus among participating entities in a computer network. The scheme taps into the in-network processing capabilities of pervasive computer networks and can withstand missing or conflicting information gathered from multiple participating entities. This dissertation thus advances knowledge in the field of network management by facilitating full use of the characteristics offered by pervasive, distributed, and wireless technologies in contemporary and future computer networks.
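The abstract does not specify the consensus rule used to fuse beliefs from multiple detecting entities. As a hedged illustration only, here is a minimal sketch of Dempster's rule of combination, a standard operator in belief theory; the function and variable names are ours, not the dissertation's:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to
    masses) with Dempster's rule, renormalising away the conflict mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two network nodes report beliefs about whether a composite event E occurred;
# mass on the whole frame `theta` represents ignorance.
E, notE = frozenset({"E"}), frozenset({"not-E"})
theta = E | notE
node1 = {E: 0.7, theta: 0.3}
node2 = {E: 0.6, notE: 0.2, theta: 0.2}
print(dempster_combine(node1, node2))  # consensus mass concentrates on E
```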
Abstract:
The James Webb Space Telescope (JWST) will likely revolutionize transiting exoplanet atmospheric science, due to a combination of its capability for continuous, long-duration observations and its larger collecting area, spectral coverage, and spectral resolution compared to existing space-based facilities. However, it is unclear precisely how well JWST will perform and which of its myriad instruments and observing modes will be best suited for transiting exoplanet studies. In this article, we describe a prefatory JWST Early Release Science (ERS) Cycle 1 program that focuses on testing specific observing modes to quickly give the community the data and experience it needs to plan more efficient and successful transiting exoplanet characterization programs in later cycles. We propose a multi-pronged approach wherein one aspect of the program focuses on observing transits of a single target with all of the recommended observing modes to identify and understand potential systematics, compare transmission spectra at overlapping and neighboring wavelength regions, confirm throughputs, and determine overall performance. In our search for transiting exoplanets well suited to achieving these goals, we identify 12 objects (dubbed “community targets”) that meet our defined criteria. Currently, the most favorable target is WASP-62b because of its large predicted signal size, relatively bright host star, and location in JWST's continuous viewing zone. Since most of the community targets do not have well-characterized atmospheres, we recommend initiating preparatory observing programs to determine the presence of obscuring clouds/hazes within their atmospheres. Measurable spectroscopic features are needed to establish the optimal resolution and wavelength regions for exoplanet characterization. Other initiatives from our proposed ERS program include testing the instrument brightness limits and performing phase-curve observations. The latter present a unique challenge compared to transit observations because of their significantly longer durations. Using only a single mode, we propose to observe a full-orbit phase curve of one of the previously characterized, short-orbital-period planets to evaluate the facility-level aspects of long, uninterrupted time-series observations.
Abstract:
Objective
Pedestrian detection in video surveillance systems has long been a hot topic in computer vision research. Such systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. In recent years, the visual attention mechanism has attracted increasing interest in object detection and tracking research, and previous studies have achieved substantial progress and breakthroughs. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: the static visual attention model and the motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, the bottom-up visual attention model of Itti is improved by intensifying the orientation vectors of elementary visual features to make the visual saliency map suitable for pedestrian detection. In terms of pedestrian attributes, skin color is selected as a semantic feature for pedestrian detection. The regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance is then proposed to complete the top-down process. The bottom-up and top-down visual attentions are linearly combined using weights obtained from experiments to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via the motion features in the temporal domain. Based on the static visual attention model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors, and filtering is applied to process the field of motion vectors. The saliency of motion vectors is evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
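As a hedged sketch of the linear-combination step described above: the paper tunes the weights experimentally, so the weights, map shapes, and function names below are placeholders of ours rather than the authors' values:

```python
import numpy as np

def normalise(m):
    """Scale a saliency map to [0, 1]; a flat map stays all-zero."""
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def spatial_temporal_saliency(bottom_up, top_down, motion,
                              w_bu=0.4, w_td=0.3, w_m=0.3):
    """Linearly fuse normalised bottom-up, top-down (skin-colour), and
    motion saliency maps; the weights stand in for experimentally tuned ones."""
    return (w_bu * normalise(bottom_up)
            + w_td * normalise(top_down)
            + w_m * normalise(motion))

# Toy 4x4 maps standing in for Itti-style, skin-colour, and motion saliency.
rng = np.random.default_rng(1)
fused = spatial_temporal_saliency(rng.random((4, 4)), rng.random((4, 4)),
                                  rng.random((4, 4)))
print(fused.round(2))
```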
Result
Standard datasets and practical videos are selected for the experiments, which are performed on the MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model demonstrates favorable robustness across various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. For pedestrian detection, the proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu, achieving a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through shifts of the focus of attention. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.
Abstract:
Over the past few decades, work on infrared sensor applications has advanced considerably worldwide. A persistent difficulty remains, however: objects are often not clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing, and related technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be viewed as a continuation of single-infrared-image enhancement: it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images, since a single image cannot contain all the relevant or available information owing to the restrictions of any single imaging sensor. We first review the development of infrared image enhancement techniques, then focus on single-infrared-image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. Infrared-visible image fusion techniques are built on accurate registration of the source images acquired by different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, leading to very accurately registered images and increased benefits for the fusion processing. For infrared-visible image fusion, a series of advanced and effective approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the fusion approaches proposed subsequently. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) to sparsely sample the coefficients and accurately reconstruct the fused coefficients is proposed; it obtains much better fusion results through pre-enhancement of the infrared image and by reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, which leads to better results more quickly and efficiently.
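The thesis applies SURF-RANSAC registration throughout. As a minimal sketch of that step, under the assumption of an OpenCV-based implementation (SURF lives in the opencv-contrib-python package; the Hessian threshold, Lowe ratio, and reprojection tolerance below are common defaults of ours, not the thesis's settings):

```python
import cv2
import numpy as np

def register_surf_ransac(ir_img, vis_img, hessian=400, ratio=0.75):
    """Align an infrared image to a visible image: SURF keypoints,
    ratio-test matching, RANSAC homography estimation, then warping."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp1, des1 = surf.detectAndCompute(ir_img, None)
    kp2, des2 = surf.detectAndCompute(vis_img, None)

    # Lowe's ratio test on 2-nearest-neighbour matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    if len(good) < 4:
        raise ValueError("not enough matches to estimate a homography")

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC drops outliers
    h, w = vis_img.shape[:2]
    return cv2.warpPerspective(ir_img, H, (w, h)), H
```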
Abstract:
Introduction: For a long time, language learning research focusing on young learners was a neglected field. Most empirical studies within the broad area of second/foreign language acquisition were instead carried out among adults in tertiary education, and it was not until the 1990s that the scope of research broadened to also include young learners, then loosely defined as children in primary and/or secondary education (see, for example, Hasselgreen & Drew, 2012; McKay, 2006; Nikolov, 2009a). In fact, how to define ‘young learners’ was not properly discussed until 2013, when Gail Ellis (2013) provided some useful clarifications regarding how to label learners within the broad age span that encompasses both primary and secondary school. In short, based on a literature overview, she concludes that the term young learners is most often used for children between the ages of five and eleven/twelve, which in most countries is equivalent to learners in primary school. Thus, since young learners did not attract much scholarly attention until fairly recently, research volumes on the topic have been scarce. However, with rapidly growing interest in how young children learn foreign languages, there has been a sudden increase in the number of books targeting young language learners. A first major contribution was Nikolov’s (2009b) Early learning of modern foreign languages, which presents 16 studies of young language learners from different countries. Another important contribution is the edited book reviewed here, which specifically targets studies of various aspects of second/foreign language learning among young (mainly Norwegian) learners. Bearing in mind that Norway and Sweden are very similar countries in terms of schooling, language background, and demographics, to give just three examples, it is particularly relevant for Swedish scholars within the fields of education and second language acquisition to become familiar with research findings from the neighboring country. In this review, the editors and the outline of the book are first described; brief summaries of each chapter are then provided, before the text closes with an evaluation of the volume.
Abstract:
With the world of professional sports shifting towards better sport analytics, the demand for vision-based performance analysis has grown rapidly in recent years. In addition, the nature of many sports does not allow any kind of sensors or other wearable markers to be attached to players for monitoring their performance during competitions. This creates a potential application for systematic observations, such as tracking information about the players, to help coaches develop the visual skills and perceptual awareness needed to make decisions about team strategy or training plans. My PhD project is part of a larger ongoing project between sport scientists and computer scientists, also involving industry partners and sports organisations. The overall idea is to investigate the contribution technology can make to the analysis of sports performance, using team sports such as rugby, football, or hockey as examples. A particular focus is on vision-based tracking, so that information about the location and dynamics of the players can be gained without any additional sensors on the players. To start with, prior approaches to visual tracking are extensively reviewed and analysed. In this thesis, methods are proposed to deal with a central difficulty in visual tracking: target appearance changes caused by intrinsic factors (e.g. pose variation) and extrinsic factors such as occlusion. This analysis highlights the importance of the proposed visual tracking algorithms, which address these challenges and provide robust and accurate frameworks for estimating the target state in a complex tracking scenario such as a sports scene, thereby facilitating the tracking process. Next, a framework for continuously tracking multiple targets is proposed. Compared to single-target tracking, multi-target tracking, such as tracking the players on a sports field, poses an additional difficulty that needs to be addressed: data association. Here, the aim is to locate all targets of interest, infer their trajectories, and decide which observation corresponds to which target trajectory. In this thesis, an efficient framework is proposed to handle this problem, especially in sport scenes, where players of the same team tend to look similar and exhibit complex interactions and unpredictable movements, resulting in matching ambiguity between the players. The presented approach is evaluated on different sports datasets and shows promising results. Finally, information from the proposed tracking system is utilised as the basic input for higher-level performance analysis, such as tactics and team formations, which can help coaches design better training plans. Due to the continuous nature of many team sports (e.g. soccer, hockey), it is not straightforward to infer high-level team behaviours such as players’ interactions. The proposed framework relies on two distinct levels of performance analysis: low-level analysis, such as identifying players’ positions on the playing field, and high-level analysis, where the aim is to estimate the density of player locations or detect their possible interaction groups. The related experiments show that the proposed approach can effectively extract this high-level information, which has many potential applications.
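The abstract does not detail the thesis's own association method. A common baseline, shown here as an illustrative sketch only (the function names and gating distance are ours), is to solve frame-to-frame data association as a linear assignment problem with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, max_dist=30.0):
    """Assign detections (N x 2 positions) to existing tracks (M x 2
    predicted positions) by minimising total Euclidean cost, then reject
    assignments whose distance exceeds the gate."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

tracks = np.array([[10.0, 12.0], [40.0, 42.0]])   # predicted player positions
dets = np.array([[11.0, 13.0], [80.0, 5.0], [39.0, 41.0]])  # new detections
print(associate(tracks, dets))  # [(0, 0), (1, 2)]; detection 1 starts a new track
```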
Abstract:
Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. Most VS methods assume a single binding site on the target, but it has been demonstrated that diverse ligands interact with unrelated parts of the target, a relevant fact that many VS methods fail to take into account. This problem is circumvented by a novel VS methodology named BINDSURF, which scans the whole protein surface to find new hotspots where ligands might potentially interact, and which is implemented on massively parallel Graphics Processing Units, allowing fast processing of large ligand databases. BINDSURF can thus be used in drug discovery, drug design, and drug repurposing, and therefore helps considerably in clinical research. However, the accuracy of most VS methods is constrained by limitations in the scoring function that describes biomolecular interactions, and even today these uncertainties are not completely understood. To address this problem, we propose a novel approach in which neural networks are trained with databases of known active compounds (drugs) and inactive compounds, and later used to improve VS predictions.
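The abstract leaves the network architecture unspecified, so the following is a minimal, self-contained sketch of the train-then-rescore idea with synthetic stand-in features and hypothetical layer sizes; in practice the inputs would be docking-derived descriptors from BINDSURF:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy stand-in for per-compound interaction descriptors (8 features each).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] - 0.5 * X[:, 3] > 0).astype(int)  # 1 = active, 0 = inactive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
# clf.predict_proba(...) can then re-rank docking hits by predicted activity.
```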
Abstract:
Perspective taking is a crucial ability that guides our social interactions. In this study, we show how the specific patterns of errors made by brain-damaged patients in perspective taking tasks can help us further understand the factors contributing to perspective taking abilities. Previous work (e.g., Samson, Apperly, Chiavarino, & Humphreys, 2004; Samson, Apperly, Kathirgamanathan, & Humphreys, 2005) distinguished two components of perspective taking: the ability to inhibit our own perspective and the ability to infer someone else’s perspective. We assessed these components using a new nonverbal false belief task that provided different response options to detect three types of response strategies participants might be using: a complete and spared belief reasoning strategy; a reality-based response selection strategy, in which participants respond from their own perspective; and a simplified mentalising strategy, in which participants avoid responding from their own perspective but rely on inaccurate cues to infer the other person’s belief. One patient, with a self-perspective inhibition deficit, almost always used the reality-based response strategy; in contrast, the other patient, with a deficit in taking other perspectives, tended to use the simplified mentalising strategy without necessarily transposing her own perspective. We discuss the extent to which the pattern of performance of both patients could relate to their executive function deficits and how it can inform us about the cognitive and neural components involved in belief reasoning.
Abstract:
Motivation: Influenza A viral heterogeneity remains a significant threat due to unpredictable antigenic drift in seasonal influenza and antigenic shifts caused by the emergence of novel subtypes. The annual review of multivalent influenza vaccines targets strains of influenza A and B likely to be predominant in future influenza seasons; this does not induce broad, cross-protective immunity against emergent subtypes. Better strategies are needed to prevent future pandemics. Cross-protection can be achieved by activating CD8+ and CD4+ T cells against highly conserved regions of the influenza genome. We combine available experimental data with informatics-based immunological predictions to help design vaccines potentially able to induce cross-protective T cells against multiple influenza subtypes. Results: To exemplify our approach, we designed two epitope ensemble vaccines comprising highly conserved and experimentally verified immunogenic influenza A epitopes as putative non-seasonal influenza vaccines; one specifically targets the US population and the other is a universal vaccine. The USA-specific vaccine comprised 6 CD8+ T cell epitopes (GILGFVFTL, FMYSDFHFI, GMDPRMCSL, SVKEKDMTK, FYIQMCTEL, DTVNRTHQY) and 3 CD4+ epitopes (KGILGFVFTLTVPSE, EYIMKGVYINTALLN, ILGFVFTLTVPSERG). The universal vaccine comprised 8 CD8+ epitopes (FMYSDFHFI, GILGFVFTL, ILRGSVAHK, FYIQMCTEL, ILKGKFQTA, YYLEKANKI, VSDGGPNLY, YSHGTGTGY) and the same 3 CD4+ epitopes. Our USA-specific vaccine has a population protection coverage (PPC, the portion of the population potentially responsive to one or more component epitopes of the vaccine) of over 96% and 95% coverage of observed influenza subtypes. The universal vaccine has a PPC of over 97% and 88% coverage of observed subtypes.
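The abstract defines PPC but not how it is computed. Under the standard simplifying assumptions (Hardy-Weinberg genotype proportions and independent HLA loci), one common way to estimate it can be sketched as follows; the allele-frequency numbers are illustrative, not the paper's data:

```python
def population_coverage(covered_allele_freqs_by_locus):
    """Estimate population protection coverage (PPC): the fraction of a
    population carrying at least one HLA allele restricting at least one
    vaccine epitope. Input maps each locus to the summed frequency of its
    covered alleles; assumes Hardy-Weinberg proportions and independent loci."""
    p_no_covered_allele = 1.0
    for freq_sum in covered_allele_freqs_by_locus.values():
        # probability neither chromosome at this locus carries a covered allele
        p_no_covered_allele *= (1.0 - freq_sum) ** 2
    return 1.0 - p_no_covered_allele

# Hypothetical summed covered-allele frequencies for one target population.
print(population_coverage({"HLA-A": 0.62, "HLA-B": 0.48, "HLA-DRB1": 0.55}))
# -> ~0.992, i.e. a PPC above 99% for these illustrative inputs
```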
Abstract:
Ligand-protein docking is an optimization problem based on predicting the position of a ligand with the lowest binding energy in the active site of the receptor. Molecular docking problems are traditionally tackled with single-objective as well as multi-objective approaches that minimize the binding energy. In this paper, we propose a novel multi-objective formulation that considers two objectives for evaluating the quality of ligand-protein interactions: the Root Mean Square Deviation (RMSD) of the ligand coordinates and the binding (intermolecular) energy. To determine the kind of Pareto front approximations that can be obtained, we selected a set of representative multi-objective algorithms: NSGA-II, SMPSO, GDE3, and MOEA/D. Their performance has been assessed by applying two main quality indicators intended to measure the convergence and diversity of the fronts. In addition, a comparison with LGA, a reference single-objective evolutionary algorithm for molecular docking (AutoDock), is carried out. In general, SMPSO shows the best overall results in terms of energy and RMSD (values lower than 2 Å indicating successful docking results). This new multi-objective approach shows an improvement over ligand-protein docking predictions that could be promising for in silico docking studies to select new anticancer compounds for therapeutic targets that are multidrug resistant.
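As a worked illustration of the two objectives and the Pareto-dominance relation the algorithms optimise (the abstract does not give the paper's implementations, so the helper names below are ours):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root Mean Square Deviation between two equally sized (N x 3) ligand
    coordinate arrays, in the same units (here Angstroms)."""
    return np.sqrt(np.mean(np.sum((coords_a - coords_b) ** 2, axis=1)))

def dominates(obj_a, obj_b):
    """True if solution a Pareto-dominates b when minimising both
    objectives (intermolecular energy, RMSD)."""
    return (all(x <= y for x, y in zip(obj_a, obj_b))
            and any(x < y for x, y in zip(obj_a, obj_b)))

pose = np.random.rand(20, 3) * 10                       # toy 20-atom ligand pose
ref = pose + np.random.normal(scale=0.5, size=pose.shape)  # perturbed reference
print("RMSD:", rmsd(pose, ref))       # below 2 Angstroms counts as a success
print(dominates((-9.1, 1.2), (-8.4, 1.9)))  # True: better on both objectives
```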
Abstract:
Master's dissertation in Corporate Finance, Faculdade de Economia, Universidade do Algarve, 2014
Abstract:
The overall purpose of this collected-papers dissertation was to examine the utility of a cognitive apprenticeship-based instructional coaching (CAIC) model for improving the science teaching efficacy beliefs (STEB) of preservice and inservice elementary teachers. Many of these teachers perceive science as a difficult subject and feel inadequately prepared to teach it. However, teacher efficacy beliefs have been noted as the strongest indicator of teacher quality, the variable most highly correlated with student achievement outcomes. The literature is scarce on strong, evidence-based theoretical models for improving STEB. This dissertation comprises two studies. STUDY #1 was a sequential explanatory mixed-methods study investigating the impact of a reformed CAIC elementary science methods course on the STEB of 26 preservice teachers. Data were collected using the Science Teaching Efficacy Belief Instrument (STEBI-B) and from six post-course interviews. A statistically significant increase in STEB was observed in the quantitative strand. The qualitative data suggested that the preservice teachers perceived all of the CAIC methods as influential, but the significance of each method depended on their unique needs and abilities. STUDY #2 was a participatory action research case study exploring the utility of a CAIC professional development program for improving the STEB of five Bahamian inservice teachers and their competency in implementing an inquiry-based curriculum. Data were collected from pre- and post-interviews and two focus group interviews. Overall, the inservice teachers perceived the intervention as highly effective. Scaffolding and coaching were the CAIC methods portrayed as most influential in developing their STEB, highlighting the importance of interpersonal relationships in successful instructional coaching programs. The teachers also described the CAIC approach as integral in supporting their learning to implement the new inquiry-based curriculum. The overall findings hold important implications for science education reform, including the potential to influence how preservice teacher training and inservice teacher professional development in science are perceived and implemented. Additionally, given the noteworthy results obtained over relatively short durations, CAIC interventions may provide an effective means of improving preservice and inservice teachers' STEB more expeditiously than traditional approaches.
Abstract:
With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted by our experiments and reviews filtered by Yelp.
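The abstract describes SpsJoin as a combined spatial and textual set-similarity join scaled out with MapReduce. The single-node join predicate at its core can be sketched as follows; this is a naive nested-loop version with hypothetical thresholds and record layout, omitting the MapReduce partitioning the dissertation relies on:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def jaccard(a, b):
    """Set-similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def spatial_set_similarity_join(r, s, max_km=0.5, min_sim=0.5):
    """Pair records whose locations are within max_km AND whose token sets
    reach min_sim Jaccard similarity (both constraints simultaneously)."""
    return [(x["id"], y["id"]) for x in r for y in s
            if haversine_km(x["loc"], y["loc"]) <= max_km
            and jaccard(x["tokens"], y["tokens"]) >= min_sim]

r = [{"id": 1, "loc": (25.76, -80.19), "tokens": {"joes", "stone", "crab"}}]
s = [{"id": 9, "loc": (25.761, -80.191), "tokens": {"joes", "crab", "shack"}}]
print(spatial_set_similarity_join(r, s))  # [(1, 9)]: nearby and textually similar
```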
Abstract:
The fibroblast growth factor (FGF) family consists of 22 evolutionarily and structurally related proteins (FGF1 to FGF23; with FGF15 being the rodent ortholog of human FGF19). Based on their mechanism of action, FGFs can be categorized into intracrine, autocrine/paracrine and endocrine subgroups. Both autocrine/paracrine and endocrine FGFs are secreted from their cells of origin and exert their effects on target cells by binding to and activating specific single-pass transmembrane tyrosine kinase receptors (FGFRs). Moreover, FGF binding to FGFRs requires specific cofactors, namely heparin/heparan sulfate proteoglycans or Klothos for autocrine/paracrine and endocrine FGF signaling, respectively. FGFs are vital for embryonic development and mediate a broad spectrum of biological functions, ranging from cellular excitability to angiogenesis and tissue regeneration. Over the past decade certain FGFs (e.g. FGF1, FGF10, FGF15/FGF19 and FGF21) have been further recognized as regulators of energy homeostasis, metabolism and adipogenesis, constituting novel therapeutic targets for obesity and obesity-related cardiometabolic disease. Until recently, translational research has been mainly focused on FGF21, due to the pleiotropic, beneficial metabolic actions and the relatively benign safety profile of its engineered variants. However, increasing evidence regarding the role of additional FGFs in the regulation of metabolic homeostasis and recent developments regarding novel, engineered FGF variants have revitalized the research interest into the therapeutic potential of certain additional FGFs (e.g. FGF1 and FGF15/FGF19). This review presents a brief overview of the FGF family, describing the mode of action of the different FGFs subgroups, and focuses on FGF1 and FGF15/FGF19, which appear to also represent promising new targets for the treatment of obesity and type 2 diabetes.
Abstract:
Background: One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 level. We aimed to estimate worldwide trends in diabetes, how likely countries are to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes. Methods: We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. We used a Bayesian hierarchical model to estimate trends in diabetes prevalence - defined as fasting plasma glucose of 7·0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs - in 200 countries and territories in 21 regions, by sex, from 1980 to 2014. We also calculated the posterior probability of meeting the global diabetes target if post-2000 trends continue. Findings: We used data from 751 studies including 4 372 000 adults from 146 of the 200 countries for which we make estimates. Global age-standardised diabetes prevalence increased from 4·3% (95% credible interval 2·4-7·0) in 1980 to 9·0% (7·2-11·1) in 2014 in men, and from 5·0% (2·9-7·9) to 7·9% (6·4-9·7) in women. The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28·5% due to the rise in prevalence, 39·7% due to population growth and ageing, and 31·8% due to the interaction of these two factors). Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa. Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of the ageing of the population. By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia. In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia. If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and is 1% for women. Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target. Interpretation: Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide. The burden of diabetes, in terms of both prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries.
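The Findings paragraph splits the rise in case numbers into a prevalence effect, a population growth/ageing effect, and their interaction. The study's own decomposition is age-adjusted and computed per country; as a hedged illustration of the idea only, here is the standard two-factor decomposition applied to crude global totals, with assumed crude prevalences chosen to roughly reproduce the abstract's split:

```python
# Crude two-factor decomposition: cases = prevalence x adult population.
# Illustrative only; the study's decomposition is age-adjusted, per country.
cases_1980, cases_2014 = 108e6, 422e6
prev_1980, prev_2014 = 0.0466, 0.0855  # ASSUMED crude prevalences, not the paper's
pop_1980 = cases_1980 / prev_1980      # implied adult populations
pop_2014 = cases_2014 / prev_2014

delta = cases_2014 - cases_1980
prevalence_effect = (prev_2014 - prev_1980) * pop_1980
population_effect = prev_1980 * (pop_2014 - pop_1980)
interaction = (prev_2014 - prev_1980) * (pop_2014 - pop_1980)

for name, part in [("prevalence rise", prevalence_effect),
                   ("population growth/ageing", population_effect),
                   ("interaction", interaction)]:
    print(f"{name}: {100 * part / delta:.1f}% of the increase")
# With these assumed inputs the three shares come out near the abstract's
# reported 28.5% / 39.7% / 31.8%.
```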