795 results for HABITAT COMPLEXITY
Abstract:
We propose a multivariate approach to the study of geographic species distribution which does not require absence data. Building on Hutchinson's concept of the ecological niche, this factor analysis compares, in the multidimensional space of ecological variables, the distribution of the localities where the focal species was observed to a reference set describing the whole study area. The first factor extracted maximizes the marginality of the focal species, defined as the ecological distance between the species optimum and the mean habitat within the reference area. The other factors maximize the specialization of this focal species, defined as the ratio of the ecological variance in mean habitat to that observed for the focal species. Eigenvectors and eigenvalues are readily interpreted and can be used to build habitat-suitability maps. This approach is recommended in situations where absence data are not available (many data banks), unreliable (most cryptic or rare species), or meaningless (invaders). We provide an illustration and validation of the method for the alpine ibex, a species reintroduced in Switzerland which presumably has not yet recolonized its entire range.
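As a minimal one-variable sketch of the two indices defined above, with species mean m_S and variance sigma_S^2 and reference-area (global) mean m_G and variance sigma_G^2 (the symbols are illustrative, not taken from the abstract):

\[
  M = \lvert m_S - m_G \rvert \qquad \text{(marginality: distance between species optimum and mean habitat)}
\]
\[
  S = \frac{\sigma_G^2}{\sigma_S^2} \qquad \text{(specialization: reference-area variance over focal-species variance)}
\]

In the factor analysis itself these quantities are maximized along successive axes of the multivariate ecological space rather than computed for each variable separately.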
Abstract:
Various institutions and countries often reach different conclusions about the utility of introducing a newborn screening test in the general population. This paper highlights the complexity of population screening that includes genetic tests. Using the example of cystic fibrosis genetic screening, whose pertinence a Swiss Working Group for Cystic Fibrosis is currently evaluating, we point out that screening recommendations are often based more on expert opinion and emerging new technologies than on evidence. We also present some ethical and economic issues related to cystic fibrosis genetic screening.
Abstract:
This paper presents a programming environment for supporting learning in STEM, particularly mobile robotics. It was designed to support progressive learning for people with and without previous knowledge of programming and/or robotics. The environment is multi-platform and built with open-source tools. Perception, mobility, communication, navigation and collaborative behaviour functionalities can be programmed for different mobile robots. A learner can program robots using different programming languages and editor interfaces: a graphical programming interface (basic level), an XML-based meta-language (intermediate level) or ANSI C (advanced level). The environment translates programs between these languages either transparently or explicitly on the learner's demand. Learners can also access proposed challenges and example-based learning interfaces. The environment was designed for extensibility, adaptive interfaces, persistence and low software/hardware coupling. Functionality tests were performed to verify the programming environment's specifications, using UV BOT mobile robots.
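As a purely hypothetical illustration of the program-translation idea described above (the command names, the generated C helpers and the header robot.h are invented for this sketch, not taken from the paper), a minimal Python translator from a high-level command list to ANSI C might look like this:

    # Hypothetical sketch: render a high-level robot command list as ANSI C.
    # Command names and the C API (move_forward, turn, robot.h) are illustrative only.
    COMMANDS = {
        "forward": "move_forward({arg});",
        "turn": "turn({arg});",
    }

    def translate_to_c(program):
        """Render a list of (command, argument) pairs as an ANSI C main()."""
        body = "\n".join("    " + COMMANDS[cmd].format(arg=arg) for cmd, arg in program)
        return '#include "robot.h"\n\nint main(void)\n{\n' + body + "\n    return 0;\n}\n"

    if __name__ == "__main__":
        # Example: drive forward 50 cm, turn 90 degrees, drive forward 20 cm.
        print(translate_to_c([("forward", 50), ("turn", 90), ("forward", 20)]))

A graphical or XML front end of the kind the abstract mentions would sit one level above such a translator, emitting the same intermediate command list.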
Abstract:
Research carried out in recent years at the site of Lattes has yielded abundant documentation on the architecture and the organization of domestic space in the 5th and especially the 4th century BC. It was indeed during this period that the urban layout of the town was established, a layout which, in its broad outlines, would persist until the end of the protohistoric occupation of the site; likewise, it was at this time that construction techniques appeared, some of which are exclusive to these periods, while others remained in use into the late Protohistoric period. This article presents a synthesis of our knowledge of this early phase, emphasizing the continuities, modifications and evolutions in the architecture, the typology of the houses and the domestic practices during these stages and in comparison with later stages.
Abstract:
documented accurately since 1960. Most records are based on nest findings and there have been few direct observations or captures, mainly because live trapping of this species is not simple. Therefore, an efficient trapping technique is needed for population studies and to facilitate the management of its habitat. By combining the methods used to capture very small (Suncus etruscus) and climbing (Muscardinus avellanarius) mammals, we developed a design using Longworth traps fitted with mouse excluders and set on suspended platforms. This allowed us to trap more harvest mice in four field sessions of 60 trap-nights than had ever been caught since the species' discovery in Switzerland.
Abstract:
In this paper we describe a taxonomy of task demands which distinguishes between Task Complexity, Task Condition and Task Difficulty. We then describe three theoretical claims and predictions of the Cognition Hypothesis (Robinson 2001, 2003b, 2005a) concerning the effects of task complexity on: (a) language production; (b) interaction and uptake of information available in the input to tasks; and (c) individual differences-task interactions. Finally we summarize the findings of the empirical studies in this special issue, which all address one or more of these predictions, and point to some directions for continuing and future research into the effects of task complexity on learning and performance.
Abstract:
The increase of publicly available sequencing data has allowed for rapid progress in our understanding of genome composition. As new information becomes available, we should constantly be updating and reanalyzing existing and newly acquired data. In this report we focus on transposable elements (TEs), which make up a significant portion of nearly all sequenced genomes. Our ability to accurately identify and classify these sequences is critical to understanding their impact on host genomes. At the same time, as we demonstrate in this report, problems with existing classification schemes have led to significant misunderstandings of the evolution of both TE sequences and their host genomes. In a pioneering publication, Finnegan (1989) proposed classifying all TE sequences into two classes based on transposition mechanisms and structural features: the retrotransposons (class I) and the DNA transposons (class II). We have retraced how ideas regarding TE classification and annotation in both prokaryotic and eukaryotic scientific communities have changed over time. This has led us to observe that: (1) a number of TEs have convergent structural features and/or transposition mechanisms that have led to misleading conclusions regarding their classification, (2) the evolution of TEs is similar to that of viruses in having several unrelated origins, and (3) there might be at least 8 classes and 12 orders of TEs, including 10 novel orders. In an effort to address these classification issues we propose: (1) the outline of a universal TE classification, (2) a set of methods and classification rules that could be used by all scientific communities involved in the study of TEs, and (3) a 5-year schedule for the establishment of an International Committee for Taxonomy of Transposable Elements (ICTTE).
Abstract:
Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (number of model parameters) remains a major concern in relation to overfitting and, hence, transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a study case. We fit 110 models with different levels of complexity under present-time conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in the generation of overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity provided the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change. Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
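As a minimal sketch of the complexity criterion mentioned above, AICc can be computed from a model's log-likelihood, its number of parameters and the number of occurrence records, and the candidate with the lowest value retained (the function name, candidate labels and toy numbers below are illustrative, not from the study):

    import math

    def aicc(log_likelihood: float, n_params: int, n_obs: int) -> float:
        """Corrected Akaike Information Criterion (AICc)."""
        if n_obs - n_params - 1 <= 0:
            return math.inf  # correction term undefined; treat model as unsupported
        aic = 2 * n_params - 2 * log_likelihood
        return aic + (2 * n_params * (n_params + 1)) / (n_obs - n_params - 1)

    # Hypothetical candidate models: (label, log-likelihood, number of parameters).
    candidates = [("simple", -420.5, 4), ("intermediate", -401.2, 9), ("complex", -398.7, 25)]
    n_presences = 120  # assumed number of occurrence records

    best = min(candidates, key=lambda m: aicc(m[1], m[2], n_presences))
    print("Lowest-AICc model:", best[0])

The second term penalizes parameter-rich models more heavily when occurrence records are few, which is why default, highly parameterized Maxent models can score worse than intermediate-complexity ones.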
Abstract:
This paper analyses the effects of manipulating the cognitive complexity of L2 oral tasks on language production. It specifically focuses on self-repairs, which are taken as a measure of accuracy since they denote both attention to form and an attempt at being accurate. By means of a repeated measures design, 42 lower-intermediate students were asked to perform three different task types (a narrative task, an instruction-giving task, and a decision-making task) for which two degrees of cognitive complexity were established. The narrative task was manipulated along +/− Here-and-Now, the instruction-giving task along +/− elements, and the decision-making task along +/− reasoning demands. Repeated measures ANOVAs are used to calculate differences between degrees of complexity and among task types. One-way ANOVAs are used to detect potential differences between low-proficiency and high-proficiency participants. Results show an overall effect of Task Complexity on self-repair behavior across task types, with behavior differing among the three task types. No differences in self-repair behavior are found between the low- and high-proficiency groups. Results are discussed in the light of theories of cognition and L2 performance (Robinson 2001a, 2001b, 2003, 2005, 2007), L1 and L2 language production models (Levelt 1989, 1993; Kormos 2000, 2006), and attention during L2 performance (Skehan 1998; Robinson 2002).
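A minimal sketch of the repeated-measures analysis described above, using statsmodels' AnovaRM on a long-format table (the column names and toy data are illustrative, not the study's; the real design had 42 participants, three task types and two complexity levels):

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Toy long-format data: one self-repair score per participant, task type and complexity level.
    data = pd.DataFrame({
        "participant": [p for p in range(1, 7) for _ in range(6)],
        "task":        ["narrative", "instruction", "decision"] * 12,
        "complexity":  (["simple"] * 3 + ["complex"] * 3) * 6,
        "self_repairs": [4, 5, 3, 6, 7, 5, 3, 4, 4, 5, 6, 6,
                         5, 4, 3, 7, 6, 5, 2, 3, 3, 4, 5, 4,
                         4, 4, 2, 6, 5, 4, 3, 5, 3, 5, 7, 5],
    })

    # Two within-subject factors: task type and degree of cognitive complexity.
    result = AnovaRM(data, depvar="self_repairs", subject="participant",
                     within=["task", "complexity"]).fit()
    print(result)

A separate one-way ANOVA on a between-subjects proficiency grouping, as in the study, would be run outside this repeated-measures model.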