Abstract:
Adaptive dynamics shows that a continuous trait under frequency-dependent selection may first converge to a singular point and then undergo a spontaneous transition from a unimodal to a bimodal trait distribution, a phenomenon called "evolutionary branching". Here, we study evolutionary branching in a deme-structured population by constructing a quantitative genetic model for the trait-variance dynamics, which allows us to obtain an analytic condition for evolutionary branching. We first show that this agrees with previous branching conditions expressed in terms of relatedness between interacting individuals within demes and obtained from mutant-resident systems. We then show that the branching condition can be markedly simplified when the evolving trait affects fecundity and/or survival, as opposed to affecting population structure, as would occur in the case of the evolution of dispersal. As an application of our model, we evaluate the threshold migration rate below which evolutionary branching cannot occur in a pairwise interaction game. This agrees very well with individual-based simulation results.
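The abstract's analytic condition concerns deme-structured populations; as a hedged illustration of the well-mixed baseline it generalizes, the sketch below checks the standard adaptive-dynamics criterion (a convergence-stable singular point is a branching point when the second derivative of invasion fitness with respect to the mutant trait is positive) in the classic Gaussian competition model. The model, parameter values and function names are illustrative assumptions, not the paper's deme-structured model.

```python
import math

def invasion_fitness(y, x, r=1.0, sigma_c=0.5, sigma_k=1.0):
    """Growth rate of a rare mutant with trait y in a resident population at trait x
    (classic Gaussian competition / carrying-capacity model, well-mixed case)."""
    C = math.exp(-(y - x) ** 2 / (2 * sigma_c ** 2))       # competition kernel, C(0)=1
    K = lambda z: math.exp(-z ** 2 / (2 * sigma_k ** 2))   # scaled carrying capacity
    return r * (1.0 - C * K(x) / K(y))

def is_branching_point(x_star, sigma_c, sigma_k, h=1e-4):
    """A singular point is a branching point iff d2s/dy2 > 0 (disruptive selection)."""
    s = lambda y: invasion_fitness(y, x_star, sigma_c=sigma_c, sigma_k=sigma_k)
    d2 = (s(x_star + h) - 2 * s(x_star) + s(x_star - h)) / h ** 2
    return d2 > 0

# Branching requires the competition kernel to be narrower than the resource kernel.
print(is_branching_point(0.0, sigma_c=0.5, sigma_k=1.0))  # True
print(is_branching_point(0.0, sigma_c=1.5, sigma_k=1.0))  # False
```

In this model the second derivative at the singular point x* = 0 equals r(1/sigma_c^2 - 1/sigma_k^2), so the sign check reproduces the textbook result that branching occurs exactly when sigma_c < sigma_k.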
Abstract:
OBJECTIVES: To investigate whether associations of smoking with depression and anxiety are likely to be causal, using a Mendelian randomisation approach. DESIGN: Mendelian randomisation meta-analyses using a genetic variant (rs16969968/rs1051730) as a proxy for smoking heaviness, and observational meta-analyses of the associations of smoking status and smoking heaviness with depression, anxiety and psychological distress. PARTICIPANTS: Current, former and never smokers of European ancestry aged ≥16 years from 25 studies in the Consortium for Causal Analysis Research in Tobacco and Alcohol (CARTA). PRIMARY OUTCOME MEASURES: Binary definitions of depression, anxiety and psychological distress assessed by clinical interview, symptom scales or self-reported recall of clinician diagnosis. RESULTS: The analytic sample included up to 58 176 never smokers, 37 428 former smokers and 32 028 current smokers (total N=127 632). In observational analyses, current smokers had 1.85 times greater odds of depression (95% CI 1.65 to 2.07), 1.71 times greater odds of anxiety (95% CI 1.54 to 1.90) and 1.69 times greater odds of psychological distress (95% CI 1.56 to 1.83) than never smokers. Former smokers also had greater odds of depression, anxiety and psychological distress than never smokers. There was evidence for positive associations of smoking heaviness with depression, anxiety and psychological distress (ORs per cigarette per day: 1.03 (95% CI 1.02 to 1.04), 1.03 (95% CI 1.02 to 1.04) and 1.02 (95% CI 1.02 to 1.03) respectively). In Mendelian randomisation analyses, there was no strong evidence that the minor allele of rs16969968/rs1051730 was associated with depression (OR=1.00, 95% CI 0.95 to 1.05), anxiety (OR=1.02, 95% CI 0.97 to 1.07) or psychological distress (OR=1.02, 95% CI 0.98 to 1.06) in current smokers. Results were similar for former smokers. 
CONCLUSIONS: Findings from Mendelian randomisation analyses do not support a causal role of smoking heaviness in the development of depression and anxiety.
Abstract:
Aim: To conduct a search and analytic review of the literature on the attributes of Advance Care Planning (ACP) and Advance Directives, in order to identify experiences and best care strategies for older adults residing in nursing homes or long-term care institutions. Methodology: An extensive electronic search was undertaken in the following databases: PubMed (via Ovid), Cumulative Index to Nursing and Allied Health Literature (CINAHL, via EBSCOhost), PsycINFO and Cochrane. After analysis and elimination of duplicates and of professionals' points of view (19), 144 titles were considered relevant: 28 opinion papers, 94 descriptive/qualitative or predictive studies, 17 experimental studies and five systematic reviews. Most were produced in North America and only 10 were in French. Results: With regard to European experiences, studies are scarce, and further research could benefit from the North American evidence. Contrary to Europe, nurses in North America play a major role in the process of care planning. The major findings concerned the poor efficacy of the completion of Advance Directives, even in the presence of a substantial variety of implementation strategies. The evidence supports interventions that conceptualize ACP as a process, with an emphasis on ascertaining patients' values and beliefs and on the necessity of including the family or loved ones from the beginning of the process, in order to favor the expression and sharing of one's life perspectives and priorities in care. The most relevant findings were associated with the conceptualization of ACP as a change in health behaviors, which requires involvement at different stages to overcome a variety of barriers. Conclusion: Rigorous research on ACP for older adults in Swiss nursing homes, promoting respect and dignity in this frail population, is needed. How best to achieve patients' and families' goals should be the focus of nursing intervention and research in this domain.
Abstract:
The method of stochastic dynamic programming is widely used in behavioral ecology, but it has some limitations arising from its use of finite time horizons. The authors present an alternative approach based on the methods of renewal theory (the theory of restoration). The suggested method uses cumulative energy reserves per unit time as its criterion, which leads to stationary cycles in the state space. This approach allows optimal feeding to be studied by analytic methods.
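A minimal sketch of the rate-per-unit-time criterion the abstract alludes to, using a hypothetical diminishing-returns gain function (the gain function, travel time and parameter values are assumptions for illustration, not the authors' model): the long-run energy gain per unit time over repeated foraging cycles is maximized, and at the optimum the marginal gain rate equals the long-run rate (the marginal value theorem).

```python
import math

def gain(t, g_max=10.0, k=2.0):
    # hypothetical diminishing-returns energy gain during time t in a patch
    return g_max * (1.0 - math.exp(-t / k))

def long_run_rate(t, travel=1.0):
    # renewal-theoretic criterion: expected gain per cycle / expected cycle length
    return gain(t) / (travel + t)

# maximize the long-run rate over patch residence time by a coarse grid search
ts = [i * 0.001 for i in range(1, 20000)]
t_star = max(ts, key=long_run_rate)

# marginal value theorem: at the optimum, the instantaneous gain rate
# equals the long-run average rate
lhs = (gain(t_star + 1e-6) - gain(t_star - 1e-6)) / 2e-6
rhs = long_run_rate(t_star)
print(round(t_star, 2), abs(lhs - rhs) < 1e-2)
```

The stationary-cycle idea in the abstract corresponds to the repeated travel-and-forage cycle here: the optimal policy is time-independent, unlike a finite-horizon dynamic program whose policy changes as the terminal time approaches.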
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability. The choice of the variables remains an issue even when data is available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help the decision makers to select the most suitable model. The number of selection criteria should remain parsimonious and not be oriented towards the results of the models in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model which allows for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, consists of the fact that operations are held, for certain schools, on multiple sites. Results show that the fact of being located on more than one site has a negative influence on efficiency. A likely way to solve this negative influence would consist of improving the use of ICT in school management and teaching. Planning new schools should also consider the advantages of being located on a unique site, which allows reaching a critical size in terms of pupils and teachers. 
The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences which explain why disadvantaged children underperform. Childrearing and literary practices, health characteristics, housing stability and economic security influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions for disadvantaged children.
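As a toy sketch of the two-stage approach attributed to Ray (1991), a DEA efficiency analysis followed by a regression of the scores on an environmental variable, the example below solves the input-oriented CCR envelopment linear program for five hypothetical schools and then regresses the efficiency scores on a multi-site dummy. The data, the single-input/single-output setup and the variable names are illustrative assumptions, not the thesis's dataset.

```python
import numpy as np
from scipy.optimize import linprog

# toy data: one input (resource units) and one output (achievement units) per school
X = np.array([2.0, 4.0, 3.0, 5.0, 6.0])   # inputs
Y = np.array([2.0, 5.0, 3.0, 4.0, 6.0])   # outputs
multi_site = np.array([0, 0, 1, 1, 1])    # hypothetical environmental dummy

def dea_ccr_input(x0, y0, X, Y):
    """Input-oriented CCR efficiency via the envelopment LP:
       min theta  s.t.  sum_j l_j X_j <= theta*x0,  sum_j l_j Y_j >= y0,  l >= 0."""
    n = len(X)
    c = np.r_[1.0, np.zeros(n)]            # decision variables: theta, lambda_1..n
    A_ub = np.vstack([np.r_[-x0, X],       #  X.l - theta*x0 <= 0
                      np.r_[0.0, -Y]])     # -Y.l <= -y0
    b_ub = np.array([0.0, -y0])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

theta = np.array([dea_ccr_input(X[j], Y[j], X, Y) for j in range(len(X))])

# stage 2: explain efficiency scores with the environmental variable (OLS)
A = np.c_[np.ones_like(theta), multi_site]
coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
print(np.round(theta, 3), np.round(coef[1], 3))
```

With this toy data the multi-site coefficient comes out negative, mirroring (but not reproducing) the thesis's finding that operating on multiple sites is associated with lower efficiency; in the actual two-stage literature the second stage is typically a censored (tobit) rather than an OLS regression.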
Abstract:
OBJECTIVE: To describe the determinants of self-initiated smoking cessation of at least 6 months' duration, as identified in longitudinal population-based studies of adolescent and young adult smokers. METHODS: A systematic search of the PubMed and EMBASE databases using smoking, tobacco, cessation, quit and stop as keywords was performed. Limits included articles related to humans, in English, published between January 1984 and August 2010, and study populations aged 10-29 years. A total of 4502 titles and 871 abstracts were reviewed independently by 2 and 3 reviewers, respectively. Nine articles were retained for data abstraction. Data on study location, timeframe, duration of follow-up, number of data collection points, sample size, age/grade of participants, number of quitters, smoking status at baseline, definition of cessation, covariates and analytic method were abstracted from each article. The number of studies that reported a statistically significant association between each determinant investigated and cessation was tabulated, from among all studies that assessed the determinant. RESULTS: Despite heterogeneity in methods across studies, five factors robustly predicted quitting across the studies in which they were investigated: not having friends who smoke, not having intentions to smoke in the future, resisting peer pressure to smoke, being older at first cigarette use and having negative beliefs about smoking. CONCLUSIONS: The literature on longitudinal predictors of cessation in adolescent and young adult smokers is not well developed. Cessation interventions for this population will remain less than optimally effective until there is a solid evidence base on which to develop interventions.
Abstract:
A fundamental tenet of neuroscience is that cortical functional differentiation is related to the cross-areal differences in cyto-, receptor-, and myeloarchitectonics that are observed in ex-vivo preparations. An ongoing challenge is to create noninvasive magnetic resonance (MR) imaging techniques that offer sufficient resolution, tissue contrast, accuracy and precision to allow for characterization of cortical architecture over an entire living human brain. One exciting development is the advent of fast, high-resolution quantitative mapping of basic MR parameters that reflect cortical myeloarchitecture. Here, we outline some of the theoretical and technical advances underlying this technique, particularly in terms of measuring and correcting for transmit and receive radio frequency field inhomogeneities. We also discuss new directions in analytic techniques, including higher resolution reconstructions of the cortical surface. We then discuss two recent applications of this technique. The first compares individual and group myelin maps to functional retinotopic maps in the same individuals, demonstrating a close relationship between functionally and myeloarchitectonically defined areal boundaries (as well as revealing an interesting disparity in a highly studied visual area). The second combines tonotopic and myeloarchitectonic mapping to localize primary auditory areas in individual healthy adults, using a similar strategy as combined electrophysiological and post-mortem myeloarchitectonic studies in non-human primates.
Abstract:
BACKGROUND: Aromatase inhibitors provide superior disease control when compared with tamoxifen as adjuvant therapy for postmenopausal women with endocrine-responsive early breast cancer. PURPOSE: To present the design, history, and analytic challenges of the Breast International Group (BIG) 1-98 trial: an international, multicenter, randomized, double-blind, phase-III study comparing the aromatase inhibitor letrozole with tamoxifen in this clinical setting. METHODS: From 1998 to 2003, BIG 1-98 enrolled 8028 women to receive monotherapy with either tamoxifen or letrozole for 5 years, or sequential therapy of 2 years of one agent followed by 3 years of the other. Randomization to one of four treatment groups permitted two complementary analyses to be conducted several years apart. The first, reported in 2005, provided a head-to-head comparison of letrozole versus tamoxifen. Statistical power was increased by an enriched design, which included patients who were assigned sequential treatments until the time of the treatment switch. The second, reported in late 2008, used a conditional landmark approach to test the hypothesis that switching endocrine agents at approximately 2 years from randomization for patients who are disease-free is superior to continuing with the original agent. RESULTS: The 2005 analysis showed the superiority of letrozole compared with tamoxifen. The patients who were assigned tamoxifen alone were unblinded and offered the opportunity to switch to letrozole. Results from other trials increased the clinical relevance of the question of whether to start treatment with letrozole or tamoxifen, and analysis plans were expanded to evaluate sequential versus single-agent strategies from randomization. LIMITATIONS: Due to the unblinding of patients assigned tamoxifen alone, analysis of updated data will require ascertainment of the influence of selective crossover from tamoxifen to letrozole.
CONCLUSIONS: BIG 1-98 is an example of an enriched design, involving complementary analyses addressing different questions several years apart, and subject to evolving analytic plans influenced by new data that emerge over time.
Abstract:
Today more than ever, economic development cannot proceed at the expense of our natural environment. This raises the question of how to maintain the competitiveness of an economy while accounting for its impact on the evolution of the natural environment. The present research investigates this question through public policies for economic promotion, and more specifically through regional policy. Should we maintain confidence in neoclassical currents, as the current situation suggests; strengthen the position of an economy embedded within a socio-environmental framework; or rethink the workings of our economy with respect to its development? Here, Swiss regional policy is evaluated in the light of three economic development strategies. The first is the knowledge-based economy, which underlies the current philosophy of regional policy. The second is industrial ecology, which promises eco-compatible economic development. The third is the functional economy, which proposes to maximize the efficiency of each unit of raw material, notably by limiting the notion of ownership. Using an analytic framework built on the model of the geographies of worth ("géographies de la grandeur"), the three strategies are confronted with the objectives of the Swiss New Regional Policy (NRP) and with its implementation arrangements. It emerges that, as things stand, the strategy based on the knowledge economy is best placed to meet the challenge of sustainable economic development. With suitable adaptations, however, the other strategies could also prove relevant. In particular, the key elements here are innovation, together with the spatial and temporal dimensions of the strategies. We therefore recommend adopting a territorialized approach to economic development, following a project-based logic in the sense of Boltanski & Chiapello. In our view, only proposals capable of federating actors and equipped with an integrated vision of development stand a chance of enabling economic development in harmony with our natural environment.
Abstract:
The topic of cardiorespiratory interactions is of extreme importance to the practicing intensivist. It also has a reputation for being intellectually challenging, due in part to the enormous volume of relevant, at times contradictory literature. Another source of difficulty is the need to simultaneously consider the interrelated functioning of several organ systems (not necessarily limited to the heart and lung), in other words, to adopt a systemic (as opposed to analytic) point of view. We believe that the proper understanding of a few simple physiological concepts is of great help in organizing knowledge in this field. The first part of this review will be devoted to demonstrating this point. The second part, to be published in a coming issue of Intensive Care Medicine, will apply these concepts to clinical situations. We hope that this text will be of some use, especially to intensivists in training, to demystify a field that many find intimidating.
Abstract:
1. Few examples of habitat-modelling studies of rare and endangered species exist in the literature, although from a conservation perspective predicting their distribution would prove particularly useful. Paucity of data and lack of valid absences are the probable reasons for this shortcoming. Analytic solutions to accommodate the lack of absence data include the ecological niche factor analysis (ENFA) and the use of generalized linear models (GLM) with simulated pseudo-absences. 2. In this study we tested a new approach to generating pseudo-absences, based on a preliminary ENFA habitat suitability (HS) map, for the endangered species Eryngium alpinum. This method of generating pseudo-absences was compared with two others: (i) use of a GLM with pseudo-absences generated totally at random, and (ii) use of an ENFA only. 3. The influence of two different spatial resolutions (i.e. grain) was also assessed for tackling the dilemma of quality (grain) vs. quantity (number of occurrences). Each combination of the three above-mentioned methods with the two grains generated a distinct HS map. 4. Four evaluation measures were used for comparing these HS maps: total deviance explained, best kappa, Gini coefficient and minimal predicted area (MPA). The last is a new evaluation criterion proposed in this study. 5. Results showed that (i) GLM models using ENFA-weighted pseudo-absences provide better results, except for the MPA value, and that (ii) quality (spatial resolution and locational accuracy) of the data appears to be more important than quantity (number of occurrences). Furthermore, the proposed MPA value is suggested as a useful measure of model evaluation when used to complement classical statistical measures. 6. Synthesis and applications. We suggest that the use of ENFA-weighted pseudo-absences is a possible way to enhance the quality of GLM-based potential distribution maps and that data quality (i.e. spatial resolution) prevails over quantity (i.e. number of data).
Increased accuracy of potential distribution maps could help to define better suitable areas for species protection and reintroduction.
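A hedged sketch of the pseudo-absence idea described above: given a preliminary habitat-suitability score per cell (standing in here for the ENFA HS map), pseudo-absences are drawn preferentially from low-suitability cells, and a logistic GLM is then fitted to presences versus pseudo-absences. The synthetic one-covariate landscape, the sampling weights and the fitting procedure are assumptions for illustration, not the study's actual ENFA workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic landscape: one environmental covariate per cell; the species
# occurs preferentially where the covariate is high
env = rng.uniform(0, 1, 2000)
hs = env  # stand-in for a preliminary ENFA habitat-suitability map in [0, 1]

presence_idx = rng.choice(2000, size=100, replace=False, p=env / env.sum())

# ENFA-weighted pseudo-absences: sample preferentially where suitability is LOW
w = 1.0 - hs
w /= w.sum()
absence_idx = rng.choice(2000, size=100, replace=False, p=w)

X = np.r_[env[presence_idx], env[absence_idx]]
y = np.r_[np.ones(100), np.zeros(100)]

# minimal logistic GLM fitted by gradient ascent on the log-likelihood
A = np.c_[np.ones_like(X), X]
beta = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-A @ beta))
    beta += 0.05 * A.T @ (y - p) / len(y)
print(np.round(beta, 2))
```

The fitted slope on the covariate comes out positive, as expected when pseudo-absences are down-weighted in suitable habitat; a production analysis would use a proper GLM routine with convergence checks rather than this fixed-step loop.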
Abstract:
OBJECTIVE: We examined the analytic validity of reported family history of hypertension and diabetes among siblings in the Seychelles. STUDY DESIGN AND SETTING: Four hundred four siblings from 73 families with at least two hypertensive persons were identified through a national hypertension register. Two gold standards were used prospectively. Sensitivity was the proportion of respondents who indicated the presence of disease in a sibling, given that the sibling reported to be affected (personal history gold standard) or was clinically affected (clinical status gold standard). Specificity was the proportion of respondents who reported an unaffected sibling, given that the sibling reported to be unaffected or was clinically unaffected. Respondents gave information on the disease status in their siblings in approximately two-thirds of instances. RESULTS: When sibling history could be obtained (n=348 for hypertension, n=404 for diabetes), the sensitivity and the specificity of the sibling history were, respectively, 90 and 55% for hypertension, and 61 and 98% for diabetes, using clinical status and, respectively, 89 and 78% for hypertension, and 53 and 98% for diabetes, using personal history. CONCLUSION: The sibling history, when available, is a useful screening test to detect hypertension, but it is less useful to detect diabetes.
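The sensitivity and specificity definitions used above can be computed directly from paired respondent reports and a gold standard. The sibling reports below are hypothetical, not the Seychelles data.

```python
def sensitivity_specificity(reported, gold):
    """reported[i]: respondent says sibling i is affected;
       gold[i]: sibling is affected per the gold standard."""
    tp = sum(r and g for r, g in zip(reported, gold))
    tn = sum((not r) and (not g) for r, g in zip(reported, gold))
    fp = sum(r and (not g) for r, g in zip(reported, gold))
    fn = sum((not r) and g for r, g in zip(reported, gold))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical reports for 10 siblings
reported = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
gold     = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(reported, gold)
print(sens, spec)  # 0.8 0.8
```

The study's pattern (high sensitivity, low specificity for hypertension; the reverse for diabetes) corresponds to respondents over-reporting a common condition and under-reporting a rarer one.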
Abstract:
Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen.
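A minimal numerical illustration of the DPCT data model described above: the measured data are 1D derivatives of the projection along the detector coordinate, so a projection profile can be recovered from differential data only up to an additive constant. The Gaussian profile is an illustrative stand-in for one view of a Radon transform, not real DPCT data, and real reconstruction works on the full sinogram rather than one profile.

```python
import numpy as np

# a smooth 1D parallel-beam projection p(s) at one view angle (hypothetical)
s = np.linspace(-1, 1, 201)
p = np.exp(-8 * s ** 2)

# DPCT measures the 1D derivative of the projection along the detector coordinate
d = np.gradient(p, s)

# discrete inversion of the differential data: trapezoidal cumulative integration
p_rec = np.concatenate([[0.0], np.cumsum((d[1:] + d[:-1]) / 2 * np.diff(s))])
p_rec += p[0] - p_rec[0]  # derivative data determine p only up to a constant

print(np.max(np.abs(p_rec - p)) < 1e-3)
```

Iterative few-view methods of the kind the abstract analyzes build this differential operator directly into the discrete imaging model instead of integrating the data first, which changes the noise statistics of the inverse problem.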
Abstract:
Interventions based on motivational interviewing (MI) are widely used with a number of different clinical populations, and their efficacy is well established. However, clinicians' training has not traditionally been a focus of empirical investigation. We conducted a meta-analytic review of findings on clinicians' MI training and MI skills. Fifteen studies were included, involving 715 clinicians. Pre-post training effect sizes (ES) were calculated (13 studies), as well as group-contrast effect sizes (7 studies). Pre-post comparisons showed medium to large effects of MI training, which were maintained over a short period of time. When compared with a control group, our results also suggested higher MI proficiency in professionals trained in MI than in untrained ones (medium ES). However, this estimate may be affected by publication bias and should therefore be considered with caution. Methodological limitations and potential sources of heterogeneity among the studies included in this meta-analysis are discussed.
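The pooled effect-size computation underlying such a meta-analysis can be sketched with a fixed-effect inverse-variance model. The per-study effect sizes and variances below are hypothetical, not the fifteen included studies.

```python
import math

# hypothetical per-study pre-post effect sizes and their sampling variances
effects   = [0.82, 0.55, 1.10, 0.64, 0.71]
variances = [0.04, 0.02, 0.09, 0.03, 0.05]

# fixed-effect inverse-variance pooling: weight each study by 1/variance
w = [1 / v for v in variances]
pooled = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)
se = math.sqrt(1 / sum(w))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(round(pooled, 3), tuple(round(c, 3) for c in ci))
```

A random-effects model, which adds a between-study variance component to each weight, is the usual choice when heterogeneity of the kind the abstract discusses is suspected.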
Abstract:
Since the 1954-55 H. Cartan seminar, it has been well known that one can find elements of arbitrarily high torsion in the integral (co)homology of an Eilenberg-MacLane space K(G,n), where G is a non-trivial abelian group and n>1. The main goal of this work is to extend this result to H-spaces having more than one non-trivial homotopy group. In order to keep precise control over H. Cartan's result, we begin by studying the duality between homology and cohomology of 2-local Eilenberg-MacLane spaces of finite type. This allows us to refine some results that follow from H. Cartan's calculations. Our main result can be stated as follows. Let X be an H-space with exactly two non-trivial homotopy groups, both finite and of 2-torsion. Then X does not admit an exponent for its reduced integral graded (co)homology group. We construct a wide class of spaces for which this result is a simple consequence of a topological feature, namely the existence of K(G,n) as a weak retract of X for some abelian group G and n>1. We also generalize our main result to more complicated stable two-stage Postnikov systems, using the Eilenberg-Moore spectral sequence together with analytic methods involving Betti numbers and their asymptotic behaviour. Finally, we conjecture that spaces with only finitely many non-trivial homotopy groups (Postnikov pieces) do not admit a (co)homology exponent. This work also includes a presentation of the "Eilenberg-MacLane machine", a C++ program designed to compute explicitly the integral homology groups of Eilenberg-MacLane spaces.

It is always difficult for a mathematician to talk about his work. The difficulty lies in the fact that the objects he studies are abstract: one rarely meets a vector space, an abelian category or a Laplace transform on the street corner! Yet even if mathematical objects are hard for a non-mathematician to grasp, the methods used to study them are essentially the same as those of the other scientific disciplines. Complex objects are dissected into components that are simpler to study. The properties of mathematical objects are listed, and the objects are classified into families sharing a common character. Different but equivalent ways of formulating a problem are sought. And so on. My work belongs to the mathematical field of algebraic topology. The ultimate goal of this discipline is to classify all topological spaces by means of algebra. This activity is comparable to that of an ornithologist (the topologist) studying birds (topological spaces) with, say, binoculars (algebra). If he sees a small, tree-dwelling, singing, nest-building bird with four-toed feet, three toes pointing forward and one, bearing a strong claw, pointing backward, then he can safely conclude that it is a passerine. It then remains to determine whether it is a sparrow, a blackbird or a nightingale. Consider a few examples of topological spaces: a) a hollow cube, b) a sphere and c) a hollow torus (i.e. an inner tube). Where any normally constituted person sees three different figures, the topologist sees only two! From his point of view, the cube and the sphere are not different, since they are homeomorphic: one can be transformed into the other continuously (it would suffice to inflate the cube to obtain the sphere). By contrast, the sphere and the torus are not homeomorphic: twist the sphere any way you like (without tearing it), and you will never obtain the torus. There are infinitely many topological spaces and, contrary to what one might naively be tempted to believe, deciding whether two of them are homeomorphic is in general very difficult. To attack this problem, topologists had the idea of bringing algebra into their reasoning. This was the birth of homotopy theory. The idea is, following a very particular recipe, to associate with every topological space an infinite collection of what algebraists call groups. The groups obtained in this way are called the homotopy groups of the topological space. Mathematicians first showed that two homeomorphic topological spaces (for instance the cube and the sphere) have the same homotopy groups. One then speaks of invariants (the homotopy groups are indeed invariant under homeomorphism). Consequently, two topological spaces that do not have the same homotopy groups cannot possibly be homeomorphic. This is an excellent way to classify topological spaces (think of the ornithologist examining birds' feet to decide whether he is dealing with a passerine). My work concerns topological spaces that have only finitely many non-zero homotopy groups. Such spaces are called finite Postnikov towers. We study their integral cohomology groups, another family of invariants alongside the homotopy groups. The size of a cohomology group is measured, in a certain sense, by the notion of exponent; a cohomology group admitting an exponent is thus relatively small. One of the main results of this work concerns the size of the cohomology groups of finite Postnikov towers. It is the following theorem: a 1-connected, 2-local H-space of finite type with only one or two non-zero homotopy groups has no exponent for its reduced integral graded cohomology group. Interpreted qualitatively, this result says that the smaller a space is from the point of view of cohomology (i.e. if it has a cohomology exponent), the more interesting it is from the point of view of homotopy (i.e. it will have more than two non-zero homotopy groups). It emerges from my work that such spaces are very interesting, in the sense that they can have infinitely many non-zero homotopy groups. Jean-Pierre Serre, Fields medallist in 1954, showed that all spheres of dimension >1 have infinitely many non-zero homotopy groups. From spaces with a cohomology exponent to spheres, there is but one step to take...