38 results for Resource-based and complementarity theory
at Université de Lausanne, Switzerland
Abstract:
Summary: Forests are key ecosystems of the Earth and are associated with a large range of functions. Many of these functions are beneficial to humans and are referred to as ecosystem services. Sustainable development requires that all relevant ecosystem services be quantified, managed and monitored equally. Natural resource management therefore targets the services associated with ecosystems. The main hypothesis of this thesis is that the spatial and temporal domains of the relevant services do not correspond to a discrete forest ecosystem. As a consequence, the services are not quantified, managed and monitored in an equal and sustainable manner. The aims of the thesis were therefore to test this hypothesis, establish an improved conceptual approach and provide spatial applications for the relevant land cover and structure variables. The study was carried out in western Switzerland and was based primarily on data from a countrywide landscape inventory. This inventory is part of the third Swiss national forest inventory and assesses continuous landscape variables based on a regular sampling of true-colour aerial imagery. In addition, land cover variables were derived from Landsat 5 TM passive sensor data, and land structure variables from active sensor data acquired with a small-footprint laser scanning system. The results confirmed the main hypothesis, as the relevant services did not scale well with the forest ecosystem. Instead, a new conceptual approach for the sustainable management of natural resources was described. This concept quantifies the services as a continuous function of the landscape, rather than as a discrete function of the forest ecosystem. The explanatory landscape variables are therefore called continuous fields, and the forest becomes a dependent and function-driven management unit. Continuous field mapping methods were established for land cover and structure variables. In conclusion, the discrete forest ecosystem is an adequate planning and management unit.
However, monitoring the state of, and trends in, the sustainability of services requires that they be quantified as a continuous function of the landscape. Sustainable natural resource management iteratively combines the ecosystem and gradient approaches.
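The contrast between the discrete ecosystem view and the continuous-field view can be sketched numerically. The following toy Python example is purely illustrative: the cover fractions, the threshold, and the cover-proportional service proxy are hypothetical values, not data or methods from the thesis.

```python
# Hypothetical canopy-cover fractions on a 4x4 landscape grid (a "continuous field")
cover = [
    [0.9, 0.8, 0.4, 0.1],
    [0.7, 0.6, 0.3, 0.0],
    [0.5, 0.4, 0.2, 0.1],
    [0.2, 0.1, 0.0, 0.0],
]

THRESHOLD = 0.5  # discrete view: a pixel counts as "forest" if cover >= threshold

# A toy service proxy (say, storage proportional to cover), quantified two ways:
service_in_forest_unit = sum(c for row in cover for c in row if c >= THRESHOLD)
service_over_landscape = sum(c for row in cover for c in row)

print(service_in_forest_unit, service_over_landscape)
```

Because part of the service accrues on pixels below the forest threshold, quantifying it only inside the discrete forest unit misses a share of the landscape total, which is the motivation for the continuous-field approach.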
Abstract:
1. Niche theory predicts that the stable coexistence of species within a guild should be associated, if resources are limited, with a mechanism of resource partitioning. Using extensive data on diets, the present study attempts: (i) to test the hypothesis that, in sympatry, the interspecific overlap between the trophic niches of the sibling bat species Myotis myotis and M. blythii, which coexist intimately in their roosts, is effectively lower than the two intraspecific overlaps; (ii) to assess the role played by interspecific competition in resource partitioning through the study of trophic niche displacement between several sympatric and allopatric populations. 2. Diets were determined by the analysis of faecal samples collected in the field from individual bats captured in various geographical areas. Trophic niche overlaps were calculated monthly for all possible intraspecific and interspecific pairs of individuals from sympatric populations. Niche breadth was estimated from: (i) every faecal sample; (ii) all the faecal samples collected per month in a given population (geographical area). 3. In every population, the bulk of the diets of M. myotis and M. blythii consisted, respectively, of terrestrial (e.g. carabid beetles) and grass-dwelling (mostly bush crickets) prey. All intraspecific trophic niche overlaps were significantly greater than the interspecific one, except in Switzerland in May, when both species exploited mass concentrations of cockchafers, a non-limiting food source. This clear-cut partitioning of resources may allow the stable, intimate coexistence observed under sympatric conditions. 4. Relative proportions of ground- and grass-dwelling prey, as well as niche breadths (either individual or population), did not differ significantly between sympatry and allopatry, showing that, under allopatric conditions, niche expansion does not take place.
This suggests that active interspecific competition is not the underlying mechanism responsible for the niche partitioning which is currently observed between M. myotis and M. blythii.
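As an illustration of how such trophic niche overlaps can be computed, the sketch below uses Pianka's symmetric overlap index on diet-proportion vectors. Both the choice of index and the prey proportions are assumptions for illustration only; the abstract does not state which overlap measure the study used, and the numbers are not its data.

```python
import math

def pianka_overlap(p, q):
    """Pianka's symmetric niche-overlap index for two diet-proportion vectors.
    Returns 1.0 for identical diets and approaches 0.0 for disjoint diets."""
    num = sum(a * b for a, b in zip(p, q))
    return num / math.sqrt(sum(a * a for a in p) * sum(b * b for b in q))

# Hypothetical proportions over four prey categories
# (carabid beetles, bush crickets, cockchafers, other):
myotis = [0.7, 0.1, 0.1, 0.1]   # mostly ground-dwelling carabid beetles
blythii = [0.1, 0.7, 0.1, 0.1]  # mostly grass-dwelling bush crickets

print(round(pianka_overlap(myotis, blythii), 3))  # low interspecific overlap
print(round(pianka_overlap(myotis, myotis), 3))   # intraspecific overlap with itself: 1.0
```

With these toy vectors the interspecific overlap is far below the within-species value, mirroring the clear-cut resource partitioning the study reports.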
Abstract:
Background and Aims: The males and females of many dioecious plant species differ from one another in important life-history traits, such as their size. If male and female reproductive functions draw on different resources, for example, one should expect males and females to display different allocation strategies as they grow. Importantly, these strategies may differ not only between the two sexes, but also between plants of different age and therefore size. Results are presented from an experiment that asks whether males and females of Mercurialis annua, an annual plant with indeterminate growth, differ over time in their allocation of two potentially limiting resources (carbon and nitrogen) to vegetative (below- and above-ground) and reproductive tissues. Methods: Comparisons were made of the temporal patterns of biomass allocation to shoots, roots and reproduction, and of the nitrogen content in the leaves, between the sexes of M. annua by harvesting plants of each sex after growth over different periods of time. Key Results and Conclusions: Males and females differed in their temporal patterns of allocation. Males allocated more to reproduction than females at early stages, but this trend was reversed at later stages. Importantly, males allocated proportionally more of their biomass towards roots at later stages, but the roots of females were larger in absolute terms. The study points to the important role played by both the timing of resource deployment and the relative versus absolute sizes of the sinks and sources in the sexual dimorphism of an annual plant.
Abstract:
Abstract: Increasing numbers of malaria cases among migrants and travelers have been reported. Although microscopic examination of blood smears remains the "gold standard" in diagnosis, this method suffers from insufficient sensitivity and requires considerable expertise. To improve diagnosis, a multiplex real-time PCR was developed. One set of generic primers targeting a highly conserved region of the 18S rRNA gene of the genus Plasmodium was designed; the amplified region was internally polymorphic enough to design four species-specific probes, for P. falciparum, P. vivax, P. malariae, and P. ovale. Real-time PCR with the species-specific probes specifically detected one plasmid copy of P. falciparum, P. vivax, P. malariae, and P. ovale. The same sensitivity was achieved for all species with real-time PCR using the 18S screening probe. Ninety-seven blood samples were investigated. For 66 of them (60 patients), microscopy and real-time PCR results were compared and showed a crude agreement of 86% for the detection of plasmodia. Discordant results were reevaluated with clinical, molecular, and sequencing data. All nine discordances between the 18S screening PCR and microscopy were resolved in favor of the molecular method, as were eight of nine discordances at the species level for the species-specific PCR among the 31 samples positive by both methods. The other 31 blood samples were tested to monitor antimalarial treatment in seven patients. The number of parasites measured by real-time PCR fell rapidly in six of the seven patients, in parallel with the parasitemia determined microscopically.
This suggests a role for quantitative real-time PCR in monitoring patients receiving antimalarial therapy.
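The reported crude agreement can be checked arithmetically. The 31 double-positive samples and the 9 discordances are stated in the abstract; the 26 concordant negatives are inferred from the totals, not stated explicitly.

```python
total = 66          # samples with both microscopy and real-time PCR results
both_positive = 31  # positive by both methods (reported)
discordant = 9      # discordances between 18S screening PCR and microscopy (reported)
both_negative = total - both_positive - discordant  # inferred concordant negatives

# Crude agreement = concordant results / all compared results
crude_agreement = (both_positive + both_negative) / total
print(round(crude_agreement * 100))  # 86, matching the reported 86%
```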
Abstract:
PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: a findings-based algorithm following a linear or branched pathway, a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immune-compromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from literature and expert opinion. Diagnostic strategies' validity was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56-, 1.44-, and 1.17-times higher harm, respectively. Findings were corroborated by sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows for all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach might, however, not be as good in other disease settings.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are distinct theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
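The relationship between the p value and the Type I error level can be made concrete with a minimal simulation, assuming a large-sample z test for a difference in means; the sample sizes and distributions below are arbitrary choices for illustration.

```python
import math
import random
import statistics

random.seed(1)

def two_sample_p(x, y):
    """Approximate two-sided p value for a difference in means
    (z test; adequate for large samples under the null)."""
    se = math.sqrt(statistics.pvariance(x) / len(x) + statistics.pvariance(y) / len(y))
    z = (statistics.fmean(x) - statistics.fmean(y)) / se
    # two-sided tail probability of the standard normal
    return math.erfc(abs(z) / math.sqrt(2))

# Under a true null (both groups drawn from the same distribution), rejecting
# whenever p < alpha should happen in roughly alpha of experiments:
# this rejection rate is the Type I error rate.
alpha, rejections, trials = 0.05, 0, 2000
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(50)]
    y = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(x, y) < alpha:
        rejections += 1
print(rejections / trials)  # close to 0.05
```

The simulation also illustrates the misconception noted above: about 5% of these "significant" results occur even though the null hypothesis is true by construction, so a small p value is not the probability that the null is true.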
Abstract:
Single-trial analysis of human electroencephalography (EEG) has recently been proposed as a way to better understand the contribution of individual subjects to a group-analysis effect, as well as to investigate single-subject mechanisms. Independent Component Analysis (ICA) has repeatedly been applied to concatenated single-trial responses at the single-subject level in order to extract those components that resemble activities of interest. More recently, we proposed a single-trial method based on topographic maps that determines which voltage configurations are reliably observed at the event-related potential (ERP) level, taking advantage of repetitions across trials. Here, we investigated the correspondence between the maps obtained by ICA and the topographies obtained by our single-trial clustering algorithm that best explained the variance of the ERP. To do this, we used example data provided on the EEGLAB website, based on a dataset from a visual target detection task. We show a robust correspondence both at the level of the activation time courses and at the level of the voltage configurations of a subset of relevant maps. We additionally show the estimated inverse solution (based on low-resolution electromagnetic tomography) of two corresponding maps occurring at approximately 300 ms post-stimulus onset, as estimated by the two aforementioned approaches. The spatial distributions of the estimated sources were significantly correlated and had in common a right parietal activation within Brodmann's Area (BA) 40. Despite their differences in theoretical bases, the consistency between the results of these two approaches shows that their underlying assumptions are indeed compatible.
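One simple way to compare two voltage topographies, such as an ICA component map and a cluster map, is the Pearson correlation across electrodes. The sketch below uses hypothetical 8-electrode maps; the electrode count, the values, and the choice of plain spatial correlation are illustrative assumptions, not the article's exact analysis.

```python
import math

def spatial_correlation(map_a, map_b):
    """Pearson correlation between two voltage topographies
    (one value per electrode), a common way to compare maps."""
    n = len(map_a)
    ma, mb = sum(map_a) / n, sum(map_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(map_a, map_b))
    sa = math.sqrt(sum((a - ma) ** 2 for a in map_a))
    sb = math.sqrt(sum((b - mb) ** 2 for b in map_b))
    return cov / (sa * sb)

# Hypothetical 8-electrode topographies from the two methods
ica_map = [0.1, 0.4, 0.9, 0.5, -0.2, -0.6, -0.3, 0.2]
cluster_map = [0.2, 0.5, 0.8, 0.4, -0.1, -0.5, -0.4, 0.1]

print(round(spatial_correlation(ica_map, cluster_map), 2))  # high correspondence
```

A correlation near 1 indicates that the two methods recovered essentially the same voltage configuration, which is the kind of correspondence reported between the ICA maps and the clustering topographies.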