22 results for Minimum Variance Model

at Université de Lausanne, Switzerland


Relevance:

30.00%

Publisher:

Abstract:

The sensitivity of altitudinal and latitudinal tree-line ecotones to climate change, particularly that of temperature, has received much attention. To improve our understanding of the factors affecting tree-line position, we used the spatially explicit dynamic forest model TreeMig. Although well-suited because of its landscape dynamics functions, TreeMig features a parabolic temperature growth response curve, which has recently been questioned, and its species parameters are not specifically calibrated for cold temperatures. Our main goals were to improve the theoretical basis of the temperature growth response curve in the model and to develop a method for deriving that curve's parameters from tree-ring data. We replaced the parabola with an asymptotic curve, calibrated for the main species at the subalpine (Swiss Alps: Pinus cembra, Larix decidua, Picea abies) and boreal (Fennoscandia: Pinus sylvestris, Betula pubescens, P. abies) tree-lines. After fitting the new parameters, the growth curve matched observed tree-ring widths better. For the subalpine species, the minimum degree-day sum allowing growth (kDDMin) was lowered by around 100 degree-days; in the case of Larix, the maximum potential ring width was increased to 5.19 mm. At the boreal tree-line, the kDDMin for P. sylvestris was lowered by 210 degree-days and its maximum ring width increased to 2.943 mm; for Betula (new in the model), kDDMin was set to 325 degree-days and the maximum ring width to 2.51 mm; the values from the only boreal sample site for Picea were similar to the subalpine ones, so the same parameters were used. However, adjusting the growth response alone did not improve the model's output concerning species' distributions and their relative importance at tree-line. Minimum winter temperature (MinWiT, the mean of the coldest winter month), which controls seedling establishment in TreeMig, proved more important for determining distribution. Picea, P. sylvestris and Betula did not previously have minimum winter temperature limits, so these values were set to the 95th percentile of each species' coldest MinWiT site (-7, -11 and -13, respectively). In a case study for the Alps, the original and newly calibrated versions of TreeMig were compared with biomass data from the National Forest Inventory (NFI). Both versions gave similar, reasonably realistic results. In conclusion, this method of deriving temperature responses from tree-rings works well. However, regeneration and its underlying factors seem more important for controlling species' distributions than previously thought. More research on regeneration ecology, especially at the upper limit of forests, is needed to further improve predictions of tree-line responses to climate change.
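The asymptotic replacement for the parabolic growth response can be sketched as a saturating function of the degree-day sum. A minimal Python illustration, assuming an exponential-saturation form; the rate constant k is an illustrative guess, not a calibrated TreeMig parameter (the Betula pubescens values kDDMin = 325 degree-days and a 2.51 mm maximum ring width come from the text):

```python
import math

def ring_width(dd_sum, kdd_min, rw_max, k=0.002):
    """Asymptotic (saturating) growth response to the degree-day sum.

    Returns an expected ring width (mm): zero at or below kdd_min,
    rising toward rw_max as the degree-day sum increases. The rate
    constant k is illustrative only.
    """
    if dd_sum <= kdd_min:
        return 0.0
    return rw_max * (1.0 - math.exp(-k * (dd_sum - kdd_min)))

# Betula pubescens values from the abstract: kDDMin = 325 degree-days,
# maximum potential ring width = 2.51 mm.
print(ring_width(325, 325, 2.51))          # 0.0 at the threshold
print(round(ring_width(2000, 325, 2.51), 3))  # approaches 2.51 mm
```

Unlike a parabola, this curve never declines at high degree-day sums, which is the property motivating the replacement.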


A method of objectively determining imaging performance for a mammography quality assurance programme for digital systems was developed. The method is based on the assessment of the visibility of a spherical microcalcification of 0.2 mm using a quasi-ideal observer model. It requires the assessment of the spatial resolution (modulation transfer function) and the noise power spectra of the systems. The contrast is measured using a 0.2-mm thick Al sheet and polymethylmethacrylate (PMMA) blocks. The minimal image quality was defined as that giving a target contrast-to-noise ratio (CNR) of 5.4. Several evaluations of this objective method for assessing image quality in mammography quality assurance programmes were carried out on computed radiography (CR) and digital radiography (DR) mammography systems. The measurement gives the threshold CNR necessary to reach the minimum standard image quality required with regard to the visibility of a 0.2-mm microcalcification. This method may replace the CDMAM image evaluation and simplify the threshold contrast visibility test used in mammography quality assurance.
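The pass/fail criterion reduces to a contrast-to-noise ratio computation. A minimal Python sketch using the standard CNR definition (absolute mean difference divided by background noise); the pixel values and ROI sizes here are synthetic, not the protocol's:

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between an ROI behind the 0.2-mm Al
    sheet and a neighbouring background ROI:
    CNR = |mean_signal - mean_background| / sigma_background."""
    s = np.asarray(roi_signal, dtype=float)
    b = np.asarray(roi_background, dtype=float)
    return abs(s.mean() - b.mean()) / b.std(ddof=1)

# Synthetic pixel values; the acceptability threshold in the text is CNR >= 5.4.
rng = np.random.default_rng(0)
bg = rng.normal(1000.0, 10.0, size=10_000)
sig = rng.normal(1080.0, 10.0, size=10_000)
value = cnr(sig, bg)
print(value >= 5.4)  # True for this synthetic contrast and noise level
```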


By definition, alcohol expectancies (after alcohol I expect X) and drinking motives (I drink to achieve X) are conceptually distinct constructs. Theorists have argued that motives mediate the association between expectancies and drinking outcomes. Yet, given the use of different instruments, do these constructs remain distinct when assessment items are matched? The present study tested to what extent motives mediated the link between expectancies and alcohol outcomes when identical items were used, first as expectancies and then as motives. A linear structural equation model was estimated based on a nationally representative sample of 5,779 alcohol-using students in Switzerland (mean age = 15.2 years). The results showed that expectancies explained up to 38% of the variance in motives. Together with motives, they explained up to 48% of the variance in alcohol outcomes (volume, 5+ drinking, and problems). In 10 of 12 outcomes, there was a significant mediated effect that was often higher than the direct expectancy effect. For coping, the expectancy effect was close to zero, indicating the strongest form of mediation. In only one case (conformity and 5+ drinking) was there a direct expectancy effect but no mediation. To conclude, the study demonstrates that motives are distinct from expectancies even when identical items are used. Motives are more proximally related to different alcohol outcomes, often mediating the effects of expectancies. Consequently, the effectiveness of interventions, particularly those aimed at coping drinkers, should be improved through a shift in focus from expectancies to drinking motives.
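The mediation logic (motives carrying the expectancy effect) can be illustrated with the product-of-coefficients approach on synthetic data. A Python sketch; this is ordinary least squares mediation, not the full structural equation model used in the study:

```python
import numpy as np

def mediation(x, m, y):
    """Product-of-coefficients mediation sketch (ordinary least squares).
    x: expectancy score, m: motive score, y: drinking outcome.
    Returns (direct effect c', mediated effect a*b)."""
    def ols(cols, target):
        X = np.column_stack([np.ones(len(target))] + list(cols))
        return np.linalg.lstsq(X, target, rcond=None)[0]
    a = ols([x], m)[1]           # expectancy -> motive
    coefs = ols([x, m], y)       # outcome ~ expectancy + motive
    c_prime, b = coefs[1], coefs[2]
    return c_prime, a * b

# Synthetic data in which the expectancy effect runs mostly through motives.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
m = 0.6 * x + rng.normal(scale=0.5, size=2000)
y = 0.05 * x + 0.7 * m + rng.normal(scale=0.5, size=2000)
direct, indirect = mediation(x, m, y)
print(indirect > direct)  # the mediated effect dominates, as for coping motives
```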


In the whole animal, metabolic regulation is set by reciprocal interactions between various organs via the blood circulation. At present, analyses of such interactions require numerous and not easily controlled in vivo experiments. In a search for an alternative to in vivo experiments, our work aims at developing a coculture system in which different cell types are isolated in polymer capsules and grown in a common environment. The signals exchanged between cells of various origins thus reproduce the in vivo intertissular communications. With this perspective, we evaluated a new encapsulation system as an artificial housing for liver cells on the one hand and adipocytes on the other. Murine hepatocytes were encapsulated in specially designed multicomponent capsules formed by polyelectrolyte complexation between sodium alginate, cellulose sulphate and poly(methylene-co-guanidine) hydrochloride, whose permeability has been characterized. We demonstrated the absence of cytotoxicity and the excellent biocompatibility of these capsules towards primary cultures of murine hepatocytes. Encapsulated hepatocytes retain their specific functions--transaminase activity, urea synthesis, and protein secretion--during the first four days of culture in minimum medium. Mature adipocytes, isolated from mouse epididymal fat, were embedded in alginate beads. Measurement of protein secretion shows an identical profile between free and embedded adipocytes. We finally assessed the properties of encapsulated hepatocytes cryopreserved over periods of up to four months. The prospects of using encapsulated cells in coculture are discussed, since this system may represent a promising tool for fundamental research, such as analyses of drug metabolism, intercellular regulations, and metabolic pathways, as well as for the establishment of a tissue bank for storage and supply of murine hepatocytes.


Neuropathic pain is defined as pain caused by a lesion of the somatosensory nervous system. It is characterized by exaggerated pain, either spontaneous or triggered by normally non-painful stimuli (allodynia) or painful stimuli (hyperalgesia). Although it affects 7% of the population, its biological mechanisms have not yet been elucidated. Studying variations in gene expression in the key tissues of the sensory pathways (notably the dorsal root ganglion and the dorsal horn of the spinal cord) at different time points after a peripheral nerve lesion could reveal new therapeutic targets. Such variations can be detected sensitively by reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR). To guarantee reliable results, guidelines have recently recommended validating the reference genes used for data normalization ("Minimum Information for Publication of Quantitative Real-Time PCR Experiments", Bustin et al. 2009). After searching the literature for the reference genes most frequently used in our spared nerve injury (SNI) model of peripheral neuropathic pain and in nervous tissue in general, we drew up a list of promising candidates: actin beta (Actb), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), ribosomal proteins 18S (18S), L13a (RPL13a) and L29 (RPL29), hypoxanthine phosphoribosyltransferase 1 (HPRT1) and hydroxymethylbilane synthase (HMBS). We evaluated the expression stability of these genes in the dorsal root ganglion and the dorsal horn at different time points after the nerve lesion (SNI) by computing coefficients of variation and applying the geNorm algorithm, which compares expression levels between candidates and determines the most stable remaining pair of genes.
It was also possible to rank the genes by stability and to identify the number of genes needed for the most accurate normalization. The genes most often cited as references in the SNI model were GAPDH, HMBS, Actb, HPRT1 and 18S. Only HPRT1 and 18S had previously been validated in RT-qPCR arrays. In our study, all the genes tested in the dorsal root ganglion and in the dorsal horn met the stability criterion of an M-value below 1. However, with a coefficient of variation (CV) above 50% in the dorsal root ganglion, 18S cannot be retained. The most stable pair of genes was HPRT1 and Actb in the dorsal root ganglion, and RPL29 and RPL13a in the dorsal horn. We therefore ranked and validated Actb, RPL29, RPL13a, HMBS, GAPDH, HPRT1 and 18S as reference genes usable in the dorsal horn for the rat SNI model; in the dorsal root ganglion, 18S did not meet our criteria. We also determined that a combination of two stable reference genes is sufficient for accurate normalization. Expression changes of potential genes of interest can now be measured under identical experimental conditions (SNI, tissue and time points after SNI) on the basis of reliable normalization. It will be possible not only to identify regulations potentially important in the genesis of neuropathic pain but also to observe the different phenotypes evolving over time after nerve injury.
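The stability ranking described above rests on geNorm's pairwise-variation measure M. A minimal Python sketch of that measure, assuming simple relative expression values as input; this reproduces only the M-value computation, not geNorm's full stepwise exclusion procedure:

```python
import numpy as np

def genorm_m(expr):
    """geNorm stability measure M for each candidate reference gene.
    expr: (n_samples, n_genes) array of relative expression levels.
    M_j = mean over all other genes k of the SD (across samples) of
    log2(expr_j / expr_k); lower M means more stable expression."""
    log_expr = np.log2(expr)
    n = expr.shape[1]
    m = np.empty(n)
    for j in range(n):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n) if k != j]
        m[j] = np.mean(sds)
    return m

# Three synthetic genes: two stable, one noisy; the noisy one gets the highest M.
rng = np.random.default_rng(2)
base = rng.normal(10.0, 0.05, size=(50, 1))
expr = 2.0 ** np.hstack([base + rng.normal(0, 0.05, (50, 2)),
                         base + rng.normal(0, 1.0, (50, 1))])
m_values = genorm_m(expr)
print(np.argmax(m_values))  # 2: the unstable gene
```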


We investigated the role of the number of loci coding for a neutral trait on the release of additive variance for this trait after population bottlenecks. Different bottleneck sizes and durations were tested for various matrices of genotypic values, with initial conditions covering the allele frequency space. We used three types of matrices. First, we extended Cheverud and Routman's model by defining matrices of "pure" epistasis for three and four independent loci; second, we used genotypic values drawn randomly from uniform, normal, and exponential distributions; and third, we used two models of simple metabolic pathways leading to physiological epistasis. For all these matrices of genotypic values except the dominant metabolic pathway, we find that the release of additive variance increases as the number of loci increases from two to three and four. The amount of additive variance released for a given set of genotypic values is a function of the inbreeding coefficient, independently of the size and duration of the bottleneck. The level of inbreeding necessary to achieve the maximum release of additive variance increases with the number of loci. We find that additive-by-additive epistasis is the type of epistasis most easily converted into additive variance. For a wide range of models, our results show that epistasis, rather than dominance, plays a significant role in the increase of additive variance following bottlenecks.


SUMMARY Heavy metal presence in the environment is a serious concern, since some heavy metals can be toxic to plants, animals and humans once accumulated along the food chain. Cadmium (Cd) is one of the most toxic heavy metals. It is naturally present in soils at various levels and its concentration can be increased by human activities. Several plants, however, have naturally developed strategies allowing them to grow on heavy-metal-enriched soils. One of them consists in the accumulation and sequestration of heavy metals in the above-ground biomass. Some plants additionally display an extreme strategy by which they accumulate a limited number of heavy metals in their shoots in amounts 100 times higher than those expected for a non-accumulating plant under the same conditions. Understanding the genetic basis of the hyperaccumulation trait - particularly for Cd - remains an important challenge which may lead to biotechnological applications in soil phytoremediation. In this thesis, Thlaspi caerulescens J. & C. Presl (Brassicaceae) was used as a model plant to study the Cd hyperaccumulation trait, owing to its physiological and genetic characteristics. Twenty-four wild populations were sampled in different regions of Switzerland. They were characterized for environmental and soil parameters as well as intrinsic characteristics of the plants (i.e. metal concentrations in shoots). They were also genetically characterized by AFLPs, plastid DNA polymorphism and gene markers (CAPS and microsatellites), mainly developed in this thesis. Some of the investigated genes were putatively linked to the Cd hyperaccumulation trait. Because studying Cd hyperaccumulation in the field allows the identification of patterns of selection, the present work offers a methodology to define the Cd hyperaccumulation capacity of populations from different habitats, thus permitting their comparison in the field.
We showed that Cd, Zn, Fe and Cu accumulations were linked and that populations with a higher Cd hyperaccumulation capacity had higher shoot and reproductive fitness. Using our genetic data, statistical methods (Beaumont & Nichols's procedure, partial Mantel tests) were applied to identify genomic signatures of natural selection related to the Cd hyperaccumulation capacity. A significant genetic difference between populations related to their Cd hyperaccumulation capacity was revealed based on some specific markers (AFLP and candidate genes). Polymorphism at the gene encoding IRT1 (an iron transporter also participating in the transport of Zn) was suggested as explaining part of the variation in the Cd hyperaccumulation capacity of populations, supporting previous physiological investigations.


BACKGROUND: The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of the alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring whether there is a global relationship within the distribution. METHODS: To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and the more traditional approach based on categories. The relationship between the mean and the standard deviation of the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance of the World Health Organization. RESULTS: The Log-Normal distribution provided a poor fit for the survey data, with the Gamma and Weibull distributions providing better fits. Additionally, our analyses showed no marked differences between alcohol PAF estimates based on the Gamma or Weibull distributions and PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a unit increase in mean consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293; R² = 0.9207) for women and 1.171 (95% CI: 1.144 to 1.197; R² = 0.9474) for men.
CONCLUSIONS: Although the Gamma distribution and the Weibull distribution provided similar results, the Gamma distribution is recommended to model alcohol consumption from population surveys due to its fit, flexibility, and the ease with which it can be modified. The results showed that a large degree of variance of the standard deviation of the alcohol consumption Gamma distribution was explained by the mean alcohol consumption, allowing for alcohol consumption to be modeled through a Gamma distribution using only average consumption.
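Because the Gamma distribution's two parameters are determined by its mean and standard deviation, the reported mean-SD link lets the whole distribution be built from average consumption alone. A sketch in Python with SciPy, assuming the regression of SD on mean passes through the origin (an illustrative simplification):

```python
from scipy import stats

def gamma_from_mean(mean_cons, slope):
    """Gamma model of population alcohol consumption parameterized
    from mean consumption alone, using the reported linear link
    SD ~ slope * mean (slope 1.258 for women, 1.171 for men)."""
    sd = slope * mean_cons
    shape = (mean_cons / sd) ** 2   # k = mean^2 / sd^2
    scale = sd ** 2 / mean_cons     # theta = sd^2 / mean
    return stats.gamma(a=shape, scale=scale)

# Example: mean consumption of 10 g/day for women.
dist = gamma_from_mean(10.0, 1.258)
print(round(dist.mean(), 1))  # 10.0
print(round(dist.std(), 2))   # 12.58
```

The method-of-moments identities (mean = k·θ, variance = k·θ²) guarantee the constructed distribution reproduces the given mean and SD exactly.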


1. We investigated experimentally predation by the flatworm Dugesia lugubris on the snail Physa acuta in relation to predator body length and to prey morphology [shell length (SL) and aperture width (AW)]. 2. SL and AW correlate strongly in the field, but display significant and independent variance among populations. In the laboratory, predation by Dugesia resulted in large and significant selection differentials on both SL and AW. Analysis of partial effects suggests that selection on AW was indirect, and mediated through its strong correlation with SL. 3. The probability P(ij) for a snail of size category i (SL) to be preyed upon by a flatworm of size category j was fitted with a Poisson probability distribution, the mean of which increased linearly with predator size (j). Despite the low number of parameters, the fit was excellent (r² = 0.96). We offer brief biological interpretations of this relationship with reference to optimal foraging theory. 4. The largest size class of Dugesia (>2 cm) did not prey on snails larger than 7 mm shell length. This size threshold might offer Physa a refuge against flatworm predation and thereby allow coexistence in the field. 5. Our results are further discussed with respect to previous field and laboratory observations on P. acuta life-history patterns, in particular its phenotypic variance in adult body size.
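The fitted predation model can be sketched as a Poisson probability over prey-size classes whose mean rises linearly with predator size. A small Python illustration; alpha and beta are illustrative placeholders, not the fitted values from the study:

```python
from math import exp, factorial

def prey_prob(i, j, alpha=0.5, beta=1.2):
    """Poisson model of prey-size selection: probability that a
    flatworm of size class j takes a snail of shell-size class i,
    with the Poisson mean increasing linearly with predator size.
    alpha and beta are illustrative, not the study's fitted values."""
    lam = alpha + beta * j
    return lam ** i * exp(-lam) / factorial(i)

# Larger predators shift the preferred prey-size class upward.
p_small_pred = [prey_prob(i, 1) for i in range(8)]
p_large_pred = [prey_prob(i, 4) for i in range(8)]
print(p_small_pred.index(max(p_small_pred)) <
      p_large_pred.index(max(p_large_pred)))  # True
```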


The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at www.flow-r.org), and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and that avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
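The spreading step can be illustrated with the basic Holmgren multiple-flow-direction weighting that Flow-R's algorithm improves upon. A Python sketch under assumed neighbour geometry; the exponent value and the persistence/inertia refinements of the modified version are not reproduced here:

```python
import numpy as np

def holmgren_weights(dz, dist, x=4.0):
    """Holmgren-style multiple-flow-direction weights for the eight
    neighbours of a DEM cell: each downslope neighbour receives a
    susceptibility fraction proportional to tan(slope)^x. The
    exponent x controls channelization (x -> infinity approaches
    single-flow direction). Flow-R's modified version adds terms
    not shown in this sketch."""
    tan_b = np.maximum(dz, 0.0) / dist   # positive only downslope
    w = tan_b ** x
    total = w.sum()
    return w / total if total > 0 else w

# Elevation drops to the 8 neighbours (m) and centre-to-centre distances
# on an assumed 10 m DEM (cardinal 10 m, diagonal ~14.14 m).
dz = np.array([2.0, 0.5, -1.0, 0.2, 1.5, 0.0, 0.1, 0.3])
dist = np.array([10.0, 14.14, 10.0, 14.14, 10.0, 14.14, 10.0, 14.14])
w = holmgren_weights(dz, dist)
print(round(w.sum(), 6), int(np.argmax(w)))  # 1.0 0 (steepest neighbour)
```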


We asked whether locally applied recombinant Bone Morphogenetic Protein-2 (rh-BMP-2) with an absorbable Type I collagen sponge (ACS) carrier could enhance the consolidation phase in a callotasis model. We performed unilateral transverse osteotomy of the tibia in 21 immature male rabbits. After a latency period of 7 days, a 3-week distraction was begun at a rate of 0.5 mm/12 h. At the end of the distraction period (Day 28), animals were randomly divided into three groups and underwent a second surgical procedure: 6 rabbits in Group I (control group; the callus was exposed and nothing was added), 6 rabbits in Group II (ACS group; receiving the absorbable collagen sponge soaked with saline) and 9 rabbits in Group III (rh-BMP-2/ACS group; receiving the ACS soaked with 100 μg/kg of rh-BMP-2, Inductos(®), Medtronic). Starting at Day 28, we assessed quantitative and qualitative radiographic parameters as well as densitometric parameters every two weeks (Days 28, 42, 56, 70 and 84). Animals were sacrificed after 8 weeks of consolidation (Day 84). Qualitative radiographic evaluation revealed hypertrophic calluses in the Group III animals. The rh-BMP-2/ACS also influenced the development of the cortex of the calluses, as shown by the modified radiographic patterns in Group III compared to Groups I and II. Densitometric analysis revealed that the bone mineral content (BMC) was significantly higher in the rh-BMP-2/ACS-treated animals (Group III).


An equation is applied for calculating the expected persistence time of an unstructured population of the white-toothed shrew Crocidura russula from Preverenges, a suburban area in western Switzerland. Population abundance data from March and November between 1977 and 1988 were fitted to the logistic density-dependence model to estimate mean population growth rate as a function of population density. The variance in mean growth rate was approximated with two different models. The largest estimated persistence time was less than a few decades, the smallest less than 10 years. The results are sensitive to the magnitude of variance in population growth rate. Deviations from the logistic density-dependence model in November are quite well explained by weather variables, but those in March are uncorrelated with weather variables. Variability in population growth rates measured in winter months may be better explained by behavioural mechanisms. Environmental variability, dispersal of juveniles and refugia within the range of the population may contribute to its long-term survival.
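The logistic density-dependence fit described above amounts to regressing realized per-capita growth on density. A minimal Python sketch on a synthetic, noise-free census series (the shrew data themselves are not reproduced here):

```python
import numpy as np

def fit_logistic(n):
    """Fit the logistic density-dependence model to a census series:
    regress realized per-capita growth r_t = ln(N_{t+1}/N_t) on
    density N_t; the intercept is r_max and the slope is -r_max/K."""
    n = np.asarray(n, dtype=float)
    r = np.log(n[1:] / n[:-1])
    slope, intercept = np.polyfit(n[:-1], r, 1)
    r_max = intercept
    K = -r_max / slope
    return r_max, K

# A noiseless logistic series (r_max = 0.5, K = 60) recovers its parameters.
n = [10.0]
for _ in range(20):
    n.append(n[-1] * np.exp(0.5 * (1 - n[-1] / 60.0)))
r_max, K = fit_logistic(n)
print(round(r_max, 2), round(K, 1))  # 0.5 60.0
```

With real census data, the residuals of this regression are what the abstract correlates against weather variables.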


Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed.

Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
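The polynomial method referred to in the Guidelines can be sketched directly: fit pixel variance as a second-order polynomial of DAK and read the three noise components off the coefficients. A Python illustration on synthetic data; the 1/variance weighting is one reasonable choice, not necessarily the scheme used in this work:

```python
import numpy as np

def decompose_noise(dak, variance, weights=None):
    """Second-order polynomial decomposition of detector noise:
    variance(DAK) = f*DAK^2 + q*DAK + e, where the quadratic term
    carries fixed-pattern noise, the linear term quantum noise and
    the constant term electronic noise. Weighting the fit (e.g. by
    1/variance) improves behaviour at low exposures."""
    f, q, e = np.polyfit(dak, variance, 2,
                         w=None if weights is None else weights)
    return f, q, e

# Synthetic detector: quantum noise dominates over the mid DAK range.
dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600.0])
var = 1e-5 * dak**2 + 0.02 * dak + 0.5
f, q, e = decompose_noise(dak, var, weights=1.0 / var)
print(round(q, 3), round(e, 2))  # 0.02 0.5
```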


Analysis of variance is commonly used in morphometry in order to ascertain differences in parameters between several populations. Failure to detect significant differences between populations (type II error) may be due to suboptimal sampling and lead to erroneous conclusions; the concept of statistical power allows one to avoid such failures by means of adequate sampling. Several examples are given from the morphometry of the nervous system, showing the use of the power of a hierarchical analysis of variance test for the choice of appropriate sample and subsample sizes. In the first case chosen, neuronal densities in the human visual cortex, we find the number of observations to be of little effect. For dendritic spine densities in the visual cortex of mice and humans, the effect is somewhat larger. A substantial effect is shown in our last example, dendritic segmental lengths in the monkey lateral geniculate nucleus. It is in the nature of the hierarchical model that sample size is always more important than subsample size. The relative weight to be attributed to subsample size thus depends on the magnitude of the between-observations variance relative to the between-individuals variance.
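The point about sample versus subsample size follows from the variance of the grand mean under a two-level design. A short Python illustration; the variance components used are made-up numbers:

```python
def var_of_mean(s2_between, s2_within, n_individuals, m_obs):
    """Variance of the grand mean under a two-level (hierarchical)
    sampling design with n individuals and m observations each:
    Var = s2_between / n + s2_within / (n * m). Extra observations
    per individual shrink only the second term, which is why sample
    size always matters more than subsample size."""
    return s2_between / n_individuals + s2_within / (n_individuals * m_obs)

# Doubling individuals halves the whole variance; doubling observations
# per individual cannot touch the between-individuals term.
v_base = var_of_mean(4.0, 1.0, n_individuals=10, m_obs=5)
v_more_ind = var_of_mean(4.0, 1.0, n_individuals=20, m_obs=5)
v_more_obs = var_of_mean(4.0, 1.0, n_individuals=10, m_obs=10)
print(v_more_ind < v_more_obs < v_base)  # True
```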


The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources.
The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data measuring community policing implementation progress over six evaluative dimensions at five-year intervals between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that minimizes classification errors.
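The variable-selection step described above can be sketched as follows. This is a hedged illustration on synthetic data, not the study's pipeline: the self-organizing-map stage is omitted (pre-assigned cluster labels stand in for its output), and scikit-learn's RandomForestClassifier is used with invented "contextual" variables rather than the actual crime, census, and built-environment data:

```python
# Sketch of the random-forests step: fit a forest to predict neighborhood
# cluster labels from contextual variables, use out-of-bag accuracy to judge
# how recoverable the typology is, and use feature importances to pick the
# key variables for a parsimonious model. Data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 600

# Three hypothetical neighborhood types; two informative contextual
# variables separate them, three pure-noise variables do not.
labels = rng.integers(0, 3, size=n)
informative = labels[:, None] + rng.normal(scale=0.3, size=(n, 2))
noise = rng.normal(size=(n, 3))
X = np.hstack([informative, noise])

forest = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0)
forest.fit(X, labels)

# High out-of-bag accuracy suggests the typology is well supported;
# the importance ranking singles out the informative variables.
print(forest.oob_score_)
print(forest.feature_importances_)
```

Keeping only the variables that dominate the importance ranking is one simple way to arrive at the "parsimonious model" the abstract refers to.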
Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
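The matching step can likewise be sketched on synthetic data. This is an assumption-laden illustration, not the study's implementation: a logistic model estimates each respondent's propensity of belonging to the posttest wave from age, gender, and education, and each posttest respondent is paired with the nearest pretest respondent on that score (1:1 nearest-neighbor matching with replacement, one of several matching schemes the authors could have used):

```python
# Minimal propensity-score-matching sketch on synthetic survey data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
age = rng.integers(18, 80, size=n)
gender = rng.integers(0, 2, size=n)
education = rng.integers(0, 4, size=n)

# Posttest membership drifts with age: the covariate imbalance that
# matching is meant to reduce.
wave = (rng.random(n) < 1 / (1 + np.exp(-(age - 45) / 30))).astype(int)

X = np.column_stack([age, gender, education])
score = LogisticRegression(max_iter=1000).fit(X, wave).predict_proba(X)[:, 1]

pre = np.flatnonzero(wave == 0)
post = np.flatnonzero(wave == 1)

# For each posttest respondent, the pretest respondent with the closest
# propensity score (matching with replacement).
matches = pre[np.abs(score[pre][None, :] - score[post][:, None]).argmin(axis=1)]

# After matching, the pretest group's mean age should sit closer to the
# posttest mean than the raw pretest mean does.
raw_gap = abs(age[pre].mean() - age[post].mean())
matched_gap = abs(age[matches].mean() - age[post].mean())
print(raw_gap, matched_gap)
```

The outcome comparison and the sensitivity analysis mentioned in the abstract would then be run on these matched samples rather than on the raw waves.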