991 results for spatial error


Abstract:

Forensic science - both as a source of and as a remedy for errors that can lead to judicial error - was studied empirically in this research. The tools used were a comprehensive literature review, experimental tests on the influence of observational biases in fingermark comparison, and semi-structured interviews with heads of forensic science laboratories and units in Switzerland and abroad. The literature review covers, among other areas, the quality of forensic science work in general, the complex interaction between science and law, and specific propositions on error sources not directly related to that interaction. A list of potential error sources, from the crime scene through to the writing of the report, was also established. For the empirical tests, the ACE-V (Analysis, Comparison, Evaluation, and Verification) process of fingermark comparison was selected as an area of special interest for the study of observational biases, owing to its heavy reliance on visual observation and to recent cases of misidentification. The results of tests performed with forensic science students suggest that the decision-making stages are the most vulnerable to stimuli inducing observational biases. In the semi-structured interviews, eleven senior forensic scientists answered questions on several subjects, including potential and existing error sources in their work, the limitations of what forensic science can deliver, and the possibilities and tools available to minimise errors. Training and education as means of improving the quality of forensic science were discussed, together with possible solutions for minimising the risk of errors. The length of time that samples of physical evidence are kept was also determined. The results indicate considerable agreement on most subjects among the international participants. Their opinions on possible explanations for the occurrence of such problems, and on the relative weight of such errors across the three stages of crime scene, laboratory, and report writing, nevertheless disagree with views widely represented in the existing literature. This research therefore provides a better view of the interaction between forensic science and judicial error and allows practical recommendations to be proposed for minimising such errors.

Abstract:

OBJECTIVES: This study sought to establish an accurate and reproducible T(2)-mapping cardiac magnetic resonance (CMR) methodology at 3 T and to evaluate it in healthy volunteers and patients with myocardial infarction. BACKGROUND: Myocardial edema affects the T(2) relaxation time on CMR, and T(2)-mapping has therefore been established to characterize edema at 1.5 T. An implementation at 3 T designed for longitudinal studies and aimed at guiding and monitoring therapy has yet to be thoroughly characterized and evaluated in vivo. METHODS: A free-breathing, navigator-gated radial CMR pulse sequence with an adiabatic T(2) preparation module and an empirical fitting equation for T(2) quantification was optimized using numerical simulations and validated at 3 T in a phantom study. Its reproducibility for myocardial T(2) quantification was then ascertained in healthy volunteers and improved using an external reference phantom with known T(2). In a small cohort of patients with established myocardial infarction, the local T(2) value and the extent of the edematous region were determined and compared with conventional T(2)-weighted CMR and, where available, x-ray coronary angiography. RESULTS: The numerical simulations and the phantom study demonstrated that the empirical fitting equation is significantly more accurate for T(2) quantification than the more conventional exponential decay. The volunteer study consistently demonstrated a reproducibility error as low as 2 ± 1% when the external reference phantom was used, and an average myocardial T(2) of 38.5 ± 4.5 ms. Intraobserver and interobserver variability in the volunteers were -0.04 ± 0.89 ms (p = 0.86) and -0.23 ± 0.91 ms (p = 0.87), respectively. In the infarction patients, T(2) in edema was 62.4 ± 9.2 ms and was consistent with the x-ray angiographic findings. The extent of the edematous region by T(2)-mapping correlated well with that from the T(2)-weighted images (r = 0.91). CONCLUSIONS: The well-characterized methodology enables robust and accurate cardiac T(2)-mapping at 3 T with high spatial resolution, while the addition of a reference phantom improves reproducibility. This technique may be well suited for longitudinal studies in patients with suspected or established heart disease.
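The abstract contrasts the study's empirical fitting equation (not given here) with the conventional mono-exponential decay. As a point of reference only, the sketch below fits that conventional model, S(TE) = A·exp(−TE/T2), to synthetic T2-prepared signals; the preparation times, noise level and true T2 are assumptions, not values taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(te, a, t2):
    """Conventional mono-exponential T2 decay model."""
    return a * np.exp(-te / t2)

# Assumed T2-preparation times (ms) and a synthetic, noisy myocardial signal;
# the study's own empirical fitting equation is not reproduced here.
te = np.array([0.0, 24.0, 48.0, 72.0])
rng = np.random.default_rng(0)
signal = exp_decay(te, 100.0, 38.5) + rng.normal(0.0, 1.0, te.size)

# Pixel-wise fit; in a real T2 map this runs for every myocardial voxel
popt, _ = curve_fit(exp_decay, te, signal, p0=(signal.max(), 50.0))
print(f"estimated T2 = {popt[1]:.1f} ms")
```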

Abstract:

Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that the explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for segregating sound objects in noisy environments remains preserved. By systematically assessing patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use can be disturbed; (2) dissociations between impaired explicit and preserved implicit use occur rather frequently; and (3) different types of sound localisation deficit can be associated with preserved implicit use. Conceptually, the dissociation between explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessments of auditory spatial functions in clinical settings, especially when adaptation to the auditory environment is at stake. Further, systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained after brain damage.

Abstract:

Lutzomyia (Nyssomyia) whitmani s.l. is the main vector of cutaneous leishmaniasis in the state of Mato Grosso, but little is known about the environmental determinants of its spatial distribution on a regional scale. Entomological surveys of this sand fly species, conducted between 1996 and 2001 in 41 municipalities of the state, were used to investigate the relationships between environmental factors and the presence of the species, and to develop a spatial model of habitat suitability. The relationships between averaged CDC light-trap indices and 15 environmental and socio-economic factors were tested by logistic regression (LR) analysis. Spatial layers of the deforestation rate and the Brazilian index of gross net production (IGNP) were identified as significant explanatory variables for vector presence in the LR model, and these were then overlaid with habitat maps. The highest habitat suitability in 2001 was obtained for the heavily deforested areas in the Central-North, South, East, and Southwest of Mato Grosso, particularly in municipalities with lower IGNP values.
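The logistic regression step can be illustrated with a minimal sketch: relate a binary presence/absence response to candidate covariates and inspect coefficient significance. The data below are simulated, and the two covariates (a deforestation rate and an IGNP-like index) merely stand in for the 15 factors actually tested in the study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical municipality-level data: the study tested 15 environmental and
# socio-economic factors; only two illustrative covariates are simulated here.
rng = np.random.default_rng(1)
n = 41                                    # number of surveyed municipalities
deforestation = rng.uniform(0.0, 1.0, n)  # assumed deforestation rate
ignp = rng.uniform(0.0, 1.0, n)           # assumed IGNP-like index
logit_p = -1.0 + 3.0 * deforestation - 2.0 * ignp
presence = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))  # vector presence/absence

X = sm.add_constant(np.column_stack([deforestation, ignp]))
fit = sm.Logit(presence, X).fit(disp=False)
print(fit.summary())                      # coefficients and p-values per factor
```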

Abstract:

This thesis proposes to carry on the philosophical work begun in Casati's and Varzi's seminal book Parts and Places by extending their general reflections on the basic formal structure of spatial representation beyond mereotopology and absolute location to the question of perspectives and perspective-dependent spatial relations. We show how, on the basis of a conceptual analysis of such notions as perspective and direction, a mereotopological theory with convexity can express perspectival spatial relations in a strictly qualitative framework. We start by introducing a particular mereotopological theory, AKGEMT, and argue that it constitutes an adequate core for a theory of spatial relations. Two features of AKGEMT are of particular importance: AKGEMT is an extensional mereotopology, implying that sameness of proper parts is a sufficient and necessary condition for identity, and it allows for (lower-dimensional) boundary elements in its domain of quantification. We then discuss an extension of AKGEMT, AKGEMTS, which results from the addition of a binary segment operator whose interpretation is that of a straight line segment between mereotopological points. Based on existing axiom systems in standard point-set topology, we propose an axiomatic characterisation of the segment operator and show that it is strong enough to sustain complex properties of a convexity predicate and a convex hull operator. We compare our segment-based characterisation of the convex hull to Cohn et al.'s axioms for the convex hull operator, arguing that our notion of convexity is significantly stronger. The discussion of AKGEMTS defines the background theory of spatial representation on which the developments in the second part of this thesis are built. The second part deals with perspectival spatial relations in two-dimensional space, i.e., such relations as those expressed by 'in front of', 'behind', 'to the left/right of', etc., and develops a qualitative formalism for perspectival relations within the framework of AKGEMTS. Two main claims are defended in part 2: that perspectival relations in two-dimensional space are four-place relations of the kind R(x, y, z, w), to be read as x is R-related to y as z looks at w; and that these four-place structures can be satisfactorily expressed within the qualitative theory AKGEMTS. To defend these two claims, we start by arguing for a unified account of perspectival relations, thus rejecting the traditional distinction between 'relative' and 'intrinsic' perspectival relations. We present a formal theory of perspectival relations in the framework of AKGEMTS, deploying the idea that perspectival relations in two-dimensional space are four-place relations with a locational and a perspectival part, and show how this four-place structure leads to a unified framework of perspectival relations. Finally, we present a philosophical motivation for the idea that perspectival relations are four-place, cashing out the thesis that perspectives are vectorial properties and arguing that vectorial properties are relations between spatial entities. Using Fine's notion of "qua objects" for an analysis of points of view, we show at last how our four-place approach to perspectival relations compares to more traditional understandings.
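The role of the segment operator can be made concrete with the standard segment-based reading of convexity, sketched below; these are illustrative definitions only, not a reproduction of the AKGEMTS axioms, with Pt, parthood (≤) and seg used as assumed notation: a region is convex when it contains the segment between any two of its points, and the convex hull of a region is the least convex region containing it.

```latex
% Illustrative only: the standard segment-based notion of convexity, not the
% actual AKGEMTS axioms. Pt(p): p is a point; p \leq x: parthood;
% seg(p,q): the straight line segment between the points p and q.
\[
  \mathrm{Cx}(x) \;\leftrightarrow\; \forall p\,\forall q\,
  \bigl(\mathrm{Pt}(p) \wedge \mathrm{Pt}(q) \wedge p \leq x \wedge q \leq x
  \;\rightarrow\; \mathrm{seg}(p,q) \leq x\bigr)
\]
\[
  \mathrm{ch}(x) \;=\; \text{the least } y \text{ such that } \mathrm{Cx}(y) \wedge x \leq y
\]
```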

Abstract:

A version of Matheron's discrete Gaussian model is applied to cell composition data. The examples are for map patterns of felsic metavolcanics in two different areas. Q-Q plots of the model for cell values representing the proportion of a 10 km x 10 km cell area underlain by this rock type are approximately linear, and the line of best fit can be used to estimate the parameters of the model. It is also shown that felsic metavolcanics in the Abitibi area of the Canadian Shield can be modeled as a fractal.
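The Q-Q-plot-based parameter estimation can be sketched as follows. This is a generic normal Q-Q fit on simulated cell proportions, not Matheron's discrete Gaussian model itself; the transform and the data are assumptions made purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical cell proportions (fraction of a 10 km x 10 km cell underlain by
# felsic metavolcanics); a simplified stand-in for the map-pattern data.
rng = np.random.default_rng(2)
props = np.clip(rng.beta(0.5, 4.0, 200), 1e-4, 1 - 1e-4)

# Q-Q plot coordinates: transformed cell values against standard-normal
# quantiles; an approximately straight line supports the model, and the fitted
# slope/intercept estimate its parameters.
z = stats.norm.ppf((np.arange(1, props.size + 1) - 0.5) / props.size)
y = np.sort(stats.norm.ppf(props))        # illustrative Gaussian transform
slope, intercept, r, _, _ = stats.linregress(z, y)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.3f}")
```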

Abstract:

Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores, and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results depend strongly on the choice of instrumental variables. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly to one another and considerably worse than disattenuated regression.
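Disattenuation in its simplest bivariate form divides the attenuated estimate by the predictor's reliability. The sketch below demonstrates this on simulated data, where the reliability is known by construction rather than estimated from a scale; the interaction-term extension discussed in the article is not shown.

```python
import numpy as np

# Minimal sketch of disattenuation in the simple bivariate case, on simulated
# data. In practice the reliability comes from an estimate such as Cronbach's
# alpha, not from the (unobservable) true scores, and the interaction-term
# extension discussed in the article is not shown.
rng = np.random.default_rng(3)
n = 200
true_x = rng.normal(size=n)
y = 0.5 * true_x + rng.normal(scale=0.5, size=n)
x_obs = true_x + rng.normal(scale=0.6, size=n)            # predictor with error

rel_x = np.var(true_x, ddof=1) / np.var(x_obs, ddof=1)    # reliability of x_obs
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)  # attenuated OLS slope
b_corrected = b_naive / rel_x                             # disattenuated slope
print(f"naive = {b_naive:.2f}, disattenuated = {b_corrected:.2f} (true 0.50)")
```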

Abstract:

In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial least squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), which examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.

Abstract:

Praziquantel chemotherapy has been the focus of the Schistosomiasis Control Program in Brazil for the past two decades. Nevertheless, information on the impact of selective chemotherapy against Schistosoma mansoni infection under the conditions confronted by health teams in endemic municipalities remains scarce. This paper compares the spatial pattern of infection before and after treatment with either a 40 mg/kg or a 60 mg/kg dose of praziquantel by determining the intensity of spatial clustering among patients at 180 and 360 days after treatment. The spatio-temporal distribution of egg-positive patients was analysed in a geographic information system using the kernel smoothing technique. While all patients became egg-negative 21 days after treatment, 17.9% and 30.9% had reverted to an egg-positive condition after 180 and 360 days, respectively. Both the prevalence and the intensity of infection after treatment were significantly lower in the 60 mg/kg than in the 40 mg/kg treatment group. The higher intensity of the kernel in the 40 mg/kg group compared with the 60 mg/kg group, at both 180 and 360 days, reflects the higher number of reverted cases in the lower-dose group. Auxiliary preventive measures to control transmission should be integrated with chemotherapy to achieve a more enduring impact.
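The kernel smoothing step can be sketched with a generic two-dimensional kernel density estimate. The coordinates below are simulated and the bandwidth is left at its default; the study itself worked with geo-referenced patient locations inside a GIS.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical coordinates of egg-positive patients (a stand-in for the
# geo-referenced cases); kernel smoothing turns the point events into an
# intensity surface whose peaks indicate clusters of reverted cases.
rng = np.random.default_rng(4)
positives = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(40, 2))

kde = gaussian_kde(positives.T)           # expects shape (n_dims, n_points)
gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(f"peak smoothed intensity: {density.max():.2f}")
```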

Abstract:

Aim: This study compares the direct, macroecological approach (MEM) for modelling species richness (SR) with the more recent approach of stacking predictions from individual species distribution models (S-SDM). We implemented both approaches on the same dataset, discuss their respective theoretical assumptions, strengths and drawbacks, and tested how well each reproduces observed patterns of SR along an elevational gradient. Location: Two study areas in the Alps of Switzerland. Methods: We implemented MEM by relating species counts to environmental predictors with statistical models, assuming a Poisson distribution. S-SDM was implemented by modelling each species distribution individually and then stacking the obtained prediction maps in three different ways - summing binary predictions, summing random draws of binomial trials and summing predicted probabilities - to obtain a final species count. Results: The direct MEM approach yields nearly unbiased predictions centred around the observed mean values, but with a lower correlation between predictions and observations than that achieved by the S-SDM approaches. This method also cannot provide any information on species identity and, thus, community composition. It does, however, accurately reproduce the hump-shaped pattern of SR observed along the elevational gradient. The S-SDM approach summing binary maps can predict individual species and thus communities, but tends to overpredict SR. The two other S-SDM approaches - summing binomial trials based on predicted probabilities and summing predicted probabilities - do not overpredict richness, but they respectively predict many competing end points of assembly or lose the individual species predictions. Furthermore, all S-SDM approaches fail to reproduce appropriately the observed hump-shaped pattern of SR along the elevational gradient. Main conclusions: The macroecological approach and S-SDM have complementary strengths. We suggest that they could be used in combination, constraining the S-SDM by the MEM predictions, to obtain better SR predictions.
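The three stacking rules compared in the study reduce to simple operations on a sites × species matrix of predicted occurrence probabilities, as in the toy sketch below; the probabilities, threshold and matrix dimensions are made up for illustration.

```python
import numpy as np

# Toy per-site occurrence probabilities from individual SDMs:
# rows = sites, columns = species (all values are made up for illustration).
rng = np.random.default_rng(5)
p = rng.uniform(0.0, 1.0, size=(4, 10))   # 4 sites, 10 species
threshold = 0.5                           # assumed binarisation threshold

sr_binary = (p >= threshold).sum(axis=1)      # summing binary predictions
sr_binomial = rng.binomial(1, p).sum(axis=1)  # summing random binomial trials
sr_prob = p.sum(axis=1)                       # summing predicted probabilities
print(sr_binary, sr_binomial, sr_prob)
```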

Abstract:

The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods, and shows that the shooting method has a lower complexity than the gathering method and, under some constraints, a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. Three unbiased estimators are given and compared for each method, with closed forms and bounds obtained for their variances. The expected value of the mean square error (MSE) is also bounded. Some of the results obtained are also shown.
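As background to the variance and MSE bounds discussed, recall that for an unbiased estimator the mean square error equals its variance, and for a sample-mean estimator both decay as 1/n. The generic sketch below illustrates this with a toy integrand; it is not one of the paper's shooting or gathering radiosity estimators.

```python
import numpy as np

# Generic illustration: for an unbiased estimator the MSE equals its variance,
# and for a sample-mean estimator both scale as 1/n. The toy integrand below
# is NOT one of the paper's shooting/gathering radiosity estimators.
rng = np.random.default_rng(7)
true_value = 1.0 / 3.0                    # integral of x^2 over [0, 1]

for n in (100, 1_000, 10_000):
    estimates = np.array([np.mean(rng.uniform(0.0, 1.0, n) ** 2) for _ in range(500)])
    mse = np.mean((estimates - true_value) ** 2)
    print(f"n={n:>6}  empirical MSE={mse:.2e}  expected ~{4 / 45 / n:.2e}")
```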

Abstract:

Forest fire sequences can be modelled as a stochastic point process in which events are characterized by their spatial locations and their occurrence in time. Cluster analysis permits the detection of the space-time pattern of forest fires. Such analyses help fire managers identify risk areas, implement preventive measures and devise strategies for an efficient distribution of firefighting resources. This paper aims to identify hot spots in forest fire sequences by means of the space-time scan statistics permutation model (STSSP) and a geographical information system (GIS) for data and results visualization. The scan statistical methodology uses a scanning window, which moves across space and time, detecting local excesses of events in specific areas over a certain period of time. The statistical significance of each cluster is then evaluated through Monte Carlo hypothesis testing. The case study comprises the forest fires registered by the Forest Service in Canton Ticino (Switzerland) from 1969 to 2008. This dataset consists of geo-referenced single events, including the locations of the ignition points and additional information. The data were aggregated into three sub-periods (reflecting important preventive legal provisions) and two main ignition causes (lightning and anthropogenic causes). Results revealed that forest fire events in Ticino are mainly clustered in the southern region, where most of the population is settled. Our analysis uncovered local hot spots arising from extemporaneous arson activities. Results regarding naturally caused fires (lightning fires) disclosed two clusters in the northern mountainous area.
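The Monte Carlo hypothesis test behind the scan statistic can be sketched for a single fixed space-time window: permuting event times over locations breaks any space-time interaction and yields a null distribution for the count inside the window. This simplified, count-based sketch with simulated ignition points is not the full likelihood-ratio scan over many windows used by the STSSP method.

```python
import numpy as np

# Simplified permutation test for ONE fixed circular window and time interval;
# the real scan statistic evaluates many windows and corrects for multiplicity.
rng = np.random.default_rng(6)
xy = rng.uniform(0, 100, size=(500, 2))     # hypothetical ignition points (km)
years = rng.integers(1969, 2009, size=500)  # hypothetical ignition years

centre, radius = np.array([30.0, 40.0]), 10.0  # assumed spatial window
t0, t1 = 1990, 1995                            # assumed temporal window
in_space = np.linalg.norm(xy - centre, axis=1) <= radius
observed = int(np.sum(in_space & (years >= t0) & (years <= t1)))

null_counts = []
for _ in range(999):                        # permute times over locations
    perm = rng.permutation(years)
    null_counts.append(np.sum(in_space & (perm >= t0) & (perm <= t1)))
p_value = (1 + np.sum(np.asarray(null_counts) >= observed)) / (1 + len(null_counts))
print(f"observed count = {observed}, Monte Carlo p-value = {p_value:.3f}")
```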

Abstract:

Understanding the distribution and composition of species assemblages, and being able to predict them in space and time, are highly important tasks for investigating the fate of biodiversity in the current context of global change. Species distribution models are tools that have proven useful for predicting the potential distribution of species by relating their occurrences to environmental variables. Species assemblages can then be predicted by combining the predictions of individual species models. In the first part of my thesis, I tested the importance of new environmental predictors for improving species distribution predictions. I showed that edaphic variables, above all soil pH and nitrogen content, can be important in species distribution models. In a second chapter, I tested the influence of the resolution of predictors on the predictive ability of species distribution models. I showed that fine-resolution predictors can improve the models for some species by giving a better estimate of the micro-topographic conditions that species tolerate, but that fine-resolution predictors for climatic factors still need to be improved. The second goal of my thesis was to test the ability of empirical models to predict characteristics of species assemblages such as species richness or functional attributes. I showed that species richness can be modelled efficiently and that the resulting prediction gives a more realistic estimate of the number of species than stacking the outputs of single species distribution models. Regarding the prediction of functional characteristics (plant height, leaf surface, seed mass) of plant assemblages, mean and extreme values of functional traits were better predicted than indices reflecting the diversity of traits in the community. This approach proved useful for understanding which environmental conditions influence particular aspects of vegetation functioning, and could also help predict the impacts of climate change on vegetation. In the last part of my thesis, I studied the capacity of stacked species distribution models to predict plant assemblages. I showed that this method tends to over-predict the number of species and that the composition of the community is not predicted exactly either. Finally, I combined the results of the macro-ecological models obtained in the preceding chapters with stacked species distribution models and showed that this approach significantly reduces the number of species predicted and, in some cases, also improves the prediction of composition. These results show that the method is promising; it now needs to be tested on further datasets.