78 results for One-shot information theory
Abstract:
Some models of sexual selection predict that individuals vary in their genetic quality and reveal some of this variation in their secondary sexual characteristics. Alpine whitefish (Coregonus sp.) develop breeding tubercles shortly before their spawning season. These tubercles are epidermal structures that are distributed regularly along the body sides of both males and females. There is still much unexplained variation in the size of breeding tubercles within both sexes, with much overlap between the sexes. It has been suggested that breeding tubercles function to maintain body contact between the mating partners during spawning, act as weapons for defence of spawning territories, or are sexual signals that reveal aspects of genetic quality. We took two samples of whitefish from their spawning place, one at the beginning and one around the peak of the spawning season. We found that females have on average smaller breeding tubercles than males, and that tubercle size partly reveals the stage of gonad maturation. Two independent full-factorial breeding experiments revealed that embryo mortality was significantly influenced by male and female effects. This finding demonstrates that the males differed in their genetic quality (because offspring get nothing but genes from their fathers). Tubercle size was negatively linked to some aspects of embryo mortality in the first breeding experiment but not significantly so in the second. This lack of consistency adds to inconsistent results reported before and suggests that (i) some aspects of genetic quality are not revealed in breeding tubercles while others are, or (ii) individuals vary in their signaling strategies and the information content of breeding tubercles is not always reliable.
Moreover, the fact that female whitefish have breeding tubercles of significant size while males seem to have few reasons to be choosy suggests that the tubercles might also serve some functions that are not linked to sexual signaling.
Abstract:
There is no doubt about the necessity of protecting digital communication: citizens entrust their most confidential and sensitive data to digital processing and communication, and so do governments, corporations, and armed forces. Digital communication networks are also an integral component of many critical infrastructures on which we depend heavily in our daily lives. Transportation services, financial services, energy grids, and food production and distribution networks are only a few examples of such infrastructures. Protecting digital communication means protecting confidentiality and integrity by encrypting and authenticating its contents. But most digital communication is not secure today. Nevertheless, some of the most pressing problems could be solved with a more stringent use of current cryptographic technologies. Quite surprisingly, a new cryptographic primitive emerges from the application of quantum mechanics to information and communication theory: Quantum Key Distribution. QKD is difficult to understand, complex, technically challenging, and costly, yet it enables two parties to share a secret key for use in any subsequent cryptographic task, with unprecedented long-term security. It is disputed whether technically and economically feasible applications can be found. Our vision is that, despite technical difficulty and inherent limitations, Quantum Key Distribution has great potential and fits well with other cryptographic primitives, enabling the development of highly secure new applications and services. In this thesis we take a structured approach to analyzing the practical applicability of QKD and present several use cases of different complexity for which it can be a technology of choice, either because of its unique forward-security features or because of its practicability.
Abstract:
The purpose of this paper is to study the diffusion and transformation of scientific information in everyday discussions. Based on rumour models and social representations theory, the impact of interpersonal communication and pre-existing beliefs on transmission of the content of a scientific discovery was analysed. In three experiments, a communication chain was simulated to investigate how laypeople make sense of a genetic discovery first published in a scientific outlet, then reported in a mainstream newspaper and finally discussed in groups. Study 1 (N=40) demonstrated a transformation of information when the scientific discovery moved along the communication chain. During successive narratives, scientific expert terminology disappeared while scientific information associated with lay terminology persisted. Moreover, the idea of a discovery of a faithfulness gene emerged. Study 2 (N=70) revealed that transmission of the scientific message varied as a function of attitudes towards genetic explanations of behaviour (pro-genetics vs. anti-genetics). Pro-genetics employed more scientific terminology than anti-genetics. Study 3 (N=75) showed that endorsement of genetic explanations was related to descriptive accounts of the scientific information, whereas rejection of genetic explanations was related to evaluative accounts of the information.
Abstract:
OBJECTIVE: Although intracranial hypertension is one of the important prognostic factors after head injury, increased intracranial pressure (ICP) may also be observed in patients with favourable outcome. We have studied whether the value of ICP monitoring can be augmented by indices describing cerebrovascular pressure-reactivity and pressure-volume compensatory reserve derived from ICP and arterial blood pressure (ABP) waveforms. METHOD: 96 patients with intracranial hypertension were studied retrospectively: 57 with fatal outcome and 39 with favourable outcome. ABP and ICP waveforms were recorded. Indices of cerebrovascular reactivity (PRx) and cerebrospinal compensatory reserve (RAP) were calculated as moving correlation coefficients between slow waves of ABP and ICP, and between slow waves of ICP pulse amplitude and mean ICP, respectively. The magnitude of 'slow waves' was derived using ICP low-pass spectral filtration. RESULTS: The most significant difference was found in the magnitude of slow waves, which was persistently higher in patients with a favourable outcome (p<0.00004). In patients who died, ICP was significantly higher (p<0.0001) and cerebrovascular pressure-reactivity (described by PRx) was compromised (p<0.024). In the same patients, the pressure-volume compensatory reserve showed a gradual deterioration over time, with a sudden drop of RAP when ICP started to rise, suggesting an overlapping disruption of the vasomotor response. CONCLUSION: Indices derived from ICP waveform analysis can be helpful for the interpretation of progressive intracranial hypertension in patients after brain trauma.
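As a minimal illustration of how a moving-correlation index such as PRx or RAP is formed, the sketch below correlates two equally sampled signals over a sliding window. The function name, window length, and inputs are ours: the clinical indices are computed from slow-wave-filtered, time-averaged ABP and ICP, not raw samples.

```python
import numpy as np

def moving_correlation(x, y, window):
    """Moving Pearson correlation between two equally sampled signals.

    For a PRx-style index, x and y would be slow-wave components of
    ABP and ICP; for RAP, ICP pulse amplitude and mean ICP.
    Returns one correlation coefficient per window position.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    out = []
    for i in range(len(x) - window + 1):
        # Pearson r over the current window
        r = np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
        out.append(r)
    return np.array(out)
```

A PRx near +1 indicates passive transmission of pressure (impaired reactivity), while values near zero or negative suggest preserved cerebrovascular reactivity.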
Abstract:
Plain film radiography often underestimates the extent of injury in children with epiphyseal fractures. Salter-Harris type V fractures (crush fractures of the epiphyseal plate) in particular are often missed initially. MRI of the ankle was performed in 10 children aged 9-17 (mean 14) years with suspected epiphyseal injury using a 1.0-T Magnetom Expert scanner. The fractures were classified according to the Salter-Harris-Rang-Ogden classification and compared with the results of plain radiography. In one case MRI could exclude epiphyseal injury; in four cases the MRI findings changed the therapeutic management. The visualisation of the fracture in three orthogonal planes and the possibility of detecting cartilage and ligamentous injury make MR imaging superior to conventional radiography and CT. With respect to radiation exposure, MRI rather than CT should be used for the diagnosis of epiphyseal injuries in children.
Abstract:
Introduction: Though a trial of intrathecal (IT) therapy should always be performed before implantation of a definitive intrathecal pump, there is no agreement on how this test should be performed. Ziconotide is trialed in most cases with continuous IT administration using implanted catheters. Unlike other intrathecal drugs, there is little experience with single-bolus IT injections of ziconotide. The aim of the study was to assess the feasibility of single-shot IT trialing with ziconotide. Patients and methods: Eleven consecutive patients with chronic, intractable neuropathic pain were trialed with a single IT bolus of 2.5 mcg of ziconotide. Pain and side effects were monitored for at least 72 hours after the injection. Depending on the response, a second injection was given a week later, with either the same dose (if VAS decreased ≥50% without side effects), a higher dose of 3.75 mcg (if VAS decreased <50% without side effects), or a lower dose of 1.25 mcg (if VAS decreased ≥50% but with side effects). If VAS decreased less than 50% and side effects occurred, no further injection was performed. When VAS decreased >50% without side effects after the first or the second dose, the result was confirmed by one more injection of the same dose one week later. The trial was considered positive if two successive injections provided a VAS decrease of more than 50% without side effects. Results: Eleven patients (6 females and 5 males) were included. Nine patients experienced modest or no pain relief. Four of these had significant side effects (dizziness, nausea, vomiting or abdominal pain) and received no further injection. Of the remaining 5, one patient withdrew from the study and four received a second injection of 3.75 mcg. The trial was negative in all 5 cases because of side effects (dizziness, drowsiness, weakness, muscle cramps); pain decreased in only 2 of these patients. Two patients experienced profound pain relief with an IT injection of 2.5 mcg. One patient had no side effects and the other had dizziness and drowsiness that disappeared with an injection of 1.25 mcg. Pain relief without adverse effects was confirmed with the second injection. The trial was considered positive for these two patients. Discussion and conclusion: The response rate of 18% (2/11) is consistent with the success rate of continuous-infusion trialing with an implanted catheter. Single-shot injection of ziconotide may therefore predict efficacy.
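The dose-adjustment rules described above form a small decision procedure, which can be sketched as follows (a restatement of the protocol as given in the abstract; the function name is ours, and the final confirmation injection is not modeled):

```python
def next_ziconotide_dose(vas_drop_pct, side_effects, dose=2.5):
    """Dose (mcg) for the next weekly IT injection per the trial rules,
    or None if the protocol stops (VAS drop < 50% with side effects)."""
    responded = vas_drop_pct >= 50
    if responded and not side_effects:
        return dose      # repeat the same dose to confirm the response
    if not responded and not side_effects:
        return 3.75      # escalate: insufficient relief, tolerated
    if responded and side_effects:
        return 1.25      # de-escalate: relief achieved but not tolerated
    return None          # non-responder with side effects: stop trialing
```

For example, a patient whose VAS dropped 60% without side effects after the initial 2.5 mcg bolus would receive the same 2.5 mcg dose again a week later.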
Abstract:
The objective of this essay is to reflect on a possible relation between entropy and emergence. A qualitative, relational approach is followed. We begin by highlighting that entropy includes the concept of dispersal, relevant to our enquiry. Emergence in complex systems arises from the coordinated behavior of their parts. Coordination in turn necessitates recognition between parts, i.e., information exchange. What will be argued here is that the scope of recognition processes between parts is increased when preceded by their dispersal, which multiplies the number of encounters and creates a richer potential for recognition. A process intrinsic to emergence is dissolvence (aka submergence or top-down constraints), which participates in the information-entropy interplay underlying the creation, evolution and breakdown of higher-level entities.
Abstract:
We have explored the possibility of obtaining first-order permeability estimates for saturated alluvial sediments based on the poro-elastic interpretation of the P-wave velocity dispersion inferred from sonic logs. Modern sonic logging tools designed for environmental and engineering applications allow P-wave velocity measurements at multiple emitter frequencies over a bandwidth covering 5 to 10 octaves. Methodological considerations indicate that, for saturated unconsolidated sediments in the silt to sand range and typical emitter frequencies ranging from approximately 1 to 30 kHz, the observable velocity dispersion should be sufficiently pronounced to allow reliable first-order estimation of the permeability structure. The corresponding predictions have been tested on and verified for a borehole penetrating a typical surficial alluvial aquifer. In addition to multifrequency sonic logs, a comprehensive suite of nuclear and electrical logs, an S-wave log, a litholog, and a limited number of laboratory measurements of the permeability of retrieved core material were also available. This complementary information was found to be essential for parameterizing the poro-elastic inversion procedure and for assessing the uncertainty and internal consistency of the corresponding permeability estimates. Our results indicate that the permeability estimates thus obtained are largely consistent with those expected from the corresponding granulometric characteristics, as well as with the available evidence from laboratory measurements. These findings are also consistent with evidence from ocean acoustics, which indicates that, over a frequency range of several orders of magnitude, the classical theory of poro-elasticity is generally capable of explaining the observed P-wave velocity dispersion in medium- to fine-grained seabed sediments.
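The abstract does not spell out how permeability enters the poro-elastic interpretation; in classical Biot theory it does so through the characteristic frequency separating the low- and high-frequency regimes, which in standard notation (our choice of symbols, not the paper's) reads:

```latex
f_c = \frac{\phi \, \eta}{2\pi \, \rho_f \, k}
```

where phi is the porosity, eta the pore-fluid viscosity, rho_f the fluid density, and k the permeability. Velocity dispersion is strongest around f_c, so locating the dispersive band within the 1-30 kHz emitter range constrains k once the other parameters are known from the complementary logs.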
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
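The two ideas can be contrasted in a few lines of code: Fisher's p value is a continuous measure of evidence, while the Neyman-Pearson procedure reduces it to a binary reject/retain decision at a pre-chosen Type I error level. This sketch assumes a standard-normal test statistic; the function names are ours.

```python
import math

def two_sided_p_value(z):
    """Fisher-style evidence measure: P(|Z| >= |z|) under the null,
    for a standard-normal test statistic z."""
    return math.erfc(abs(z) / math.sqrt(2))

def reject_null(z, alpha=0.05):
    """Neyman-Pearson-style decision: reject H0 iff the statistic
    falls in the critical region defined by Type I error level alpha."""
    return two_sided_p_value(z) < alpha
```

Note that the p value says nothing about P(H0 is true); it conditions on the null being true, not on the data.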
Abstract:
This paper evaluates the reception of Léon Walras' ideas in Russia before 1920. Despite an unfavourable institutional context, Walras was read by Russian economists. On the one hand, Bortkiewicz and Winiarski, who lived outside Russia and had the opportunity to meet and correspond with Walras, were first-class readers and very good ambassadors for Walras' ideas, while on the other, the economists living in Russia were more selective in their readings. They restricted themselves to Walras' Elements of Pure Economics, in particular its theory of exchange, while ignoring its theory of production. We introduce a cultural argument to explain their selective reading. JEL classification numbers: B13, B19.
Abstract:
In a recent paper, Traulsen and Nowak use a multilevel selection model to show that cooperation can be favored by group selection in finite populations [Traulsen A, Nowak M (2006) Proc Natl Acad Sci USA 103:10952-10955]. The authors challenge the view that kin selection may be an appropriate interpretation of their results and state that group selection is a distinctive process "that permeates evolutionary processes from the emergence of the first cells to eusociality and the economics of nations." In this paper, we start by addressing Traulsen and Nowak's challenge and demonstrate that all their results can be obtained by an application of kin selection theory. We then extend Traulsen and Nowak's model to life history conditions that have been previously studied. This allows us to highlight the differences and similarities between Traulsen and Nowak's model and typical kin selection models and also to broaden the scope of their results. Our retrospective analyses of Traulsen and Nowak's model illustrate that it is possible to convert group selection models to kin selection models without disturbing the mathematics describing the net effect of selection on cooperation.
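The kin-selection accounting that the authors apply ultimately rests on Hamilton's rule, which in its standard textbook form (our notation, not the paper's) states that a cooperative act is favored by selection when

```latex
r\,b - c > 0
```

where c is the fitness cost of the act to the actor, b the benefit to the recipient, and r the genetic relatedness between them. Converting a group selection model to a kin selection model amounts to re-expressing the model's net selective effect on cooperation in these cost, benefit, and relatedness terms.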
Abstract:
Introduction Societies of ants, bees, wasps and termites dominate many terrestrial ecosystems (Wilson 1971). Their evolutionary and ecological success is based upon the regulation of internal conflicts (e.g. Ratnieks et al. 2006), control of diseases (e.g. Schmid-Hempel 1998), and individual skills and collective intelligence in resource acquisition, nest building and defence (e.g. Camazine 2001). Individuals in social species can pass on their genes not only directly through their own offspring, but also indirectly by favouring the reproduction of relatives. The inclusive fitness theory of Hamilton (1963; 1964) provides a powerful explanation for the evolution of reproductive altruism and cooperation in groups with related individuals. The same theory also led to the realization that insect societies are subject to internal conflicts over reproduction. A relatedness of less than one is not sufficient to eliminate all incentive for individual selfishness; this would indeed require a relatedness of one, as found among the cells of an organism (Hardin 1968; Keller 1999). The challenge for evolutionary biology is to understand how groups can prevent or reduce the selfish exploitation of resources by group members, and how societies with low relatedness are maintained. In social insects the evolutionary shift from single- to multiple-queen colonies modified the relatedness structure, the dispersal, and the mode of colony founding (e.g. Crozier & Pamilo 1996). In ants, the most common, and presumably ancestral, mode of reproduction is the emission of winged males and females, which found a new colony independently after mating and dispersal flights (Hölldobler & Wilson 1990).
The alternative reproductive tactic for ant queens in multiple-queen (polygyne) colonies is to seek to be re-accepted into their natal colonies, where they may remain as additional reproductives or subsequently disperse on foot with part of the colony (budding) (Bourke & Franks 1995; Crozier & Pamilo 1996; Hölldobler & Wilson 1990). Such ant colonies can contain up to several hundred reproductive queens with an even more numerous workforce (Cherix 1980; Cherix 1983). As a consequence, in polygynous ants the relatedness among nestmates is very low, and workers raise the brood of queens to which they are only distantly related (Crozier & Pamilo 1996; Queller & Strassmann 1998). Workers could therefore increase their inclusive fitness by preferentially caring for their closest relatives and by discriminating against less related or foreign individuals (Keller 1997; Queller & Strassmann 2002; Tarpy et al. 2004). However, the bulk of the evidence suggests that social insects do not behave nepotistically, probably because of the costs entailed by decreased colony efficiency or discrimination errors (Keller 1997). Recently, the consensus that nepotistic behaviour does not occur in insect colonies was challenged by a study in the ant Formica fusca (Hannonen & Sundström 2003b) showing that the reproductive share of queens more closely related to workers increases during brood development. However, this pattern can be explained either by nepotism, with workers preferentially rearing the brood of more closely related queens, or by intrinsic differences in the viability of eggs laid by queens. In the first chapter, we designed an experiment to disentangle nepotism and differences in brood viability. We tested whether workers prefer to rear their kin when given the choice between highly related and unrelated brood in the ant F. exsecta.
We also looked for differences in egg viability among queens and simulated whether such differences in egg viability may mistakenly lead to the conclusion that workers behave nepotistically. The acceptance of queens in polygynous ants raises the question of whether the varying degree of relatedness affects their share in reproduction. In such colonies workers should favour nestmate queens over foreign queens. Numerous studies have investigated reproductive skew and the partitioning of reproduction among queens (Bourke et al. 1997; Fournier et al. 2004; Fournier & Keller 2001; Hammond et al. 2006; Hannonen & Sundström 2003a; Heinze et al. 2001; Kümmerli & Keller 2007; Langer et al. 2004; Pamilo & Seppä 1994; Ross 1988; Ross 1993; Rüppell et al. 2002), yet almost no information is available on whether differences among queens in their relatedness to other colony members affect their share in reproduction. Such data are necessary to compare the relative reproductive success of dispersing and non-dispersing individuals. Moreover, information on whether there is a difference in reproductive success between resident and dispersing queens is also important for our understanding of the genetic structure of ant colonies and the dynamics of within-group conflicts. In chapter two, we created single-queen colonies and then introduced a foreign queen originating from another colony kept under similar conditions, in order to estimate the rate of queen acceptance into foreign established colonies and to quantify the reproductive share of resident and introduced queens. An increasing number of studies have investigated the discrimination ability of ant workers (e.g. Holzer et al. 2006; Pedersen et al. 2006), but few have addressed the recognition and discrimination behaviour of workers towards reproductive individuals entering colonies (Bennett 1988; Brown et al. 2003; Evans 1996; Fortelius et al. 1993; Kikuchi et al. 2007; Rosengren & Pamilo 1986; Stuart et al.
1993; Sundström 1997; Vásquez & Silverman in press). These studies are important because accepting new queens will generally have a large impact on colony kin structure and the inclusive fitness of workers (Heinze & Keller 2000). In chapter three, we examined whether resident workers reject young foreign queens that enter their nest. We introduced mated queens into their natal nest, a foreign female-producing nest, or a foreign male-producing nest and measured their survival. In addition, we also introduced young virgin and mated queens into their natal nest to examine whether the mating status of the queens influences their survival and acceptance by workers. Beyond polygyny, some ant species have evolved an extraordinary social organization called 'unicoloniality' (Hölldobler & Wilson 1977; Pedersen et al. 2006). In unicolonial ants, intercolony borders are absent and workers and queens mix among the physically separated nests, such that the nests form one large supercolony. Supercolonies can become very large, so that direct cooperative interactions are impossible between individuals of distant nests. Unicoloniality is an evolutionary paradox and a potential problem for kin selection theory because the mixing of queens and workers between nests leads to extremely low relatedness among nestmates (Bourke & Franks 1995; Crozier & Pamilo 1996; Keller 1995). A better understanding of the evolution and maintenance of unicoloniality requires detailed information on the discrimination behaviour, dispersal, population structure, and the scale of competition. Cryptic genetic population structure may provide important information on the relevant scale to be considered when measuring relatedness and the role of kin selection. Theoretical studies have shown that relatedness should be measured at the level of the 'economic neighborhood', which is the scale at which intraspecific competition generally takes place (Griffin & West 2002; Kelly 1994; Queller 1994; Taylor 1992).
In chapter four, we conducted a large-scale study to determine whether the unicolonial ant Formica paralugubris forms populations that are organised in discrete supercolonies or whether there is a continuous gradation in the level of aggression that may correlate with genetic isolation by distance and/or spatial distance between nests. In chapter five, we investigated the fine-scale population structure in three populations of F. paralugubris. We developed mitochondrial markers which, together with the nuclear markers, allowed us to detect cryptic genetic clusters of nests, to obtain more precise information on the genetic differentiation within populations, and to separate male and female gene flow. These new data provide important information on the scale to be considered when measuring relatedness in native unicolonial populations.
Abstract:
The good news with regard to this (or any) chapter on the future of leadership is that there is one. There was a time when researchers called for a moratorium on new leadership theory and research (e.g., Miner, 1975), citing the uncertain future of the field. Then for a time there was a popular academic perspective that leadership did not really matter when it came to shaping organizational outcomes (Meindl & Ehrlich, 1987; Meindl, Ehrlich, & Dukerich, 1985; Pfeffer, 1977). That perspective was laid to rest by "realists" in the field (Day & Antonakis, 2012a) by means of empirical re-interpretation of the results used to support the position that leadership does not matter (Lieberson & O'Connor, 1972; Salancik & Pfeffer, 1977). Specifically, Day and Lord (1988) showed that when proper methodological concerns were addressed (e.g., controlling for industry and company size effects; incorporating appropriate time lags), the impact of top-level leadership was considerable, explaining as much as 45% of the variance in measures of organizational performance. Despite some recent pessimistic sentiments about the "curiously unformed" state of leadership research and theory (Hackman & Wageman, 2007), others have argued that the field has continued to evolve and is potentially on the threshold of some significant breakthroughs (Day & Antonakis, 2012a). Leadership scholars have been re-energized by new directions in the field, and research efforts have revitalized areas previously abandoned for apparent lack of consistency in findings (e.g., leadership trait theory). Our accumulated knowledge now allows us to explain the nature of leadership, including its biological bases and other antecedents and consequences, with some degree of confidence. There are other comprehensive sources that review the extensive theoretical and empirical foundation of leadership (Bass, 2008; Day & Antonakis, 2012b), so that will not be the focus of the present chapter.
Instead, we will take a future-oriented perspective in identifying particular areas within the leadership field that we believe offer promising perspectives on the future of leadership. Nonetheless, it is worthwhile as background to first provide an overview of how we see the leadership field having changed over the past decade or so. This short chronicle will set the stage for a keener understanding of where future contributions are likely to emerge. Overall, across nine major schools of leadership (trait, behavioural, contingency, contextual, relational, sceptics, information processing, New Leadership, and biological and evolutionary), researchers have seen a resurgence of interest in one area, a high level of activity in at least four other areas, inactivity in three areas, and one area that was modestly active in the previous decade but that we think holds strong promise for the future (Gardner, Lowe, Moss, Mahoney, & Cogliser, 2010). We will next provide brief overviews of these nine schools and their respective levels of research activity (see Figure 1).
Abstract:
Textual autocorrelation is a broad and pervasive concept, referring to the similarity between nearby textual units: lexical repetitions along consecutive sentences, semantic association between neighbouring lexemes, persistence of discourse types (narrative, descriptive, dialogal...), and so on. Textual autocorrelation can also be negative, as illustrated by alternating phonological or morpho-syntactic categories, or the succession of word lengths. This contribution proposes a general Markov formalism for textual navigation, inspired by spatial statistics. The formalism can express well-known constructs in textual data analysis, such as term-document matrices, reference and hyperlink navigation, (web) information retrieval, and in particular textual autocorrelation, as measured by Moran's I relative to the exchange matrix associated with neighbourhoods of various possible types. Four case studies (word-length alternation, lexical repulsion, part-of-speech autocorrelation, and semantic autocorrelation) illustrate the theory. In particular, one observes a short-range repulsion between nouns together with a short-range attraction between verbs, both at the lexical and semantic levels.
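Moran's I over a neighbourhood weight (exchange) matrix can be sketched in a few lines. This is an illustrative implementation with uniform weights; the paper's exchange matrices are more general (e.g. Markov-derived), and the variable names are ours.

```python
import numpy as np

def morans_i(x, w):
    """Moran's I for values x under a symmetric neighbourhood weight
    matrix w. For textual data, x could be word lengths per unit and
    w could link each unit to its immediate neighbours.

    I > 0: positive autocorrelation (neighbours resemble each other);
    I < 0: negative autocorrelation (neighbours alternate)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()                      # centred values
    n, W = len(x), w.sum()                # size and total weight
    return (n / W) * (z @ w @ z) / (z @ z)
```

With a chain adjacency matrix, a strictly alternating sequence such as word lengths 1, 2, 1, 2, ... yields a strongly negative I, matching the word-length alternation case study.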
Abstract:
Sustainable resource use is one of the most important environmental issues of our times. It is closely related to discussions on the 'peaking' of various natural resources serving as energy sources, agricultural nutrients, or metals indispensable in high-technology applications. Although the peaking theory remains controversial, it is commonly recognized that a more sustainable use of resources would alleviate the negative environmental impacts related to resource use. In this thesis, sustainable resource use is analysed from a practical standpoint, through several different case studies. Four of these case studies relate to resource metabolism in the Canton of Geneva in Switzerland: the aim was to model the evolution of chosen resource stocks and flows in the coming decades. The studied resources were copper (a bulk metal), phosphorus (a vital agricultural nutrient), and wood (a renewable resource). The case of lithium (a critical metal) was also analysed briefly, in a qualitative manner and from an electric-mobility perspective. In addition to the Geneva case studies, this thesis includes a case study on the sustainability of space life support systems, whose aim is to provide the crew of a spacecraft with the necessary metabolic consumables over the course of a mission. Sustainability was again analysed from a resource use perspective. In this case study, the functioning of two different types of life support systems, ARES and BIORAT, was evaluated and compared; these systems represent, respectively, physico-chemical and biological life support systems. Space life support systems could in fact be used as a kind of 'laboratory of sustainability', given that they represent closed and relatively simple systems compared to complex and open terrestrial systems such as the Canton of Geneva.
The analysis method chosen for the Geneva case studies was dynamic material flow analysis: dynamic material flow models were constructed for copper, phosphorus, and wood. Besides a baseline scenario, various alternative scenarios (notably involving increased recycling) were also examined. In the case of space life support systems, the methodology of material flow analysis was also employed, but as the available data on the dynamic behaviour of the systems were insufficient, only static simulations could be performed. The results of the case studies in the Canton of Geneva show the following: were resource use to follow population growth, resource consumption would be multiplied by nearly 1.2 by 2030 and by 1.5 by 2080. A complete transition to electric mobility would be expected to only slightly (+5%) increase copper consumption per capita, while the lithium demand in cars would increase 350-fold. Phosphorus imports could be decreased by recycling sewage sludge or human urine; however, the health and environmental impacts of these options have yet to be studied. Increasing wood production in the Canton would not significantly decrease the dependence on wood imports, as the Canton's production represents only 5% of total consumption. In the comparison of the space life support systems ARES and BIORAT, BIORAT outperforms ARES in resource use but not in energy use. However, as the systems are dimensioned very differently, it remains questionable whether they can be compared outright. In conclusion, the use of dynamic material flow analysis can provide useful information for policy makers and strategic decision-making; however, uncertainty in reference data greatly influences the precision of the results. Space life support systems constitute an extreme case of resource-using systems; nevertheless, it is not clear how their example could be of immediate use to terrestrial systems.
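The stock-flow logic of a dynamic material flow model can be sketched as a toy simulation: a use-phase stock fed by a constant inflow, a discard flow proportional to the stock, and a recycling loop that reduces the demand for primary material. All parameters here are illustrative and not taken from the thesis's calibrated Geneva models.

```python
def dynamic_mfa(stock0, years, inflow, lifetime, recycling_rate):
    """Minimal dynamic material flow model.

    Each year, a fraction 1/lifetime of the in-use stock is discarded;
    a share recycling_rate of discards is recycled, displacing primary
    (virgin) material. Returns (stock_series, primary_demand_series).
    """
    stock = float(stock0)
    stocks, primary = [], []
    for _ in range(years):
        outflow = stock / lifetime            # discards from the stock
        recycled = recycling_rate * outflow   # secondary material recovered
        primary.append(inflow - recycled)     # primary input still needed
        stock += inflow - outflow             # stock balance
        stocks.append(stock)
    return stocks, primary
```

Running such a model under a baseline and an increased-recycling scenario shows the kind of comparison the thesis makes: recycling leaves the in-use stock unchanged but lowers primary demand year by year.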