997 results for "Extraction semi-automatique de termes"


Relevance: 100.00%

Abstract:

Introduction: Because of the high compliance of their rib cage, infants must actively maintain their end-expiratory lung volume (EELV). They do so by interrupting expiration early, by braking expiratory flow at the larynx, and by prolonging the contraction of the inspiratory muscles into expiration. In mechanically ventilated infants, our team has shown that the diaphragm remains active until the end of expiration (tonic activity). It is unclear whether this tonic diaphragmatic activity compensates for the loss of laryngeal braking caused by endotracheal intubation. Objective: To determine whether tonic diaphragmatic activity persists after extubation in infants, and whether it can be observed in older children. Methods: This is a prospective longitudinal observational study of patients aged 1 week to 18 years admitted to the pediatric intensive care unit (PICU), mechanically ventilated for >24 hours, and with parental consent. The electrical activity of the diaphragm (EAdi) was recorded with a dedicated nasogastric catheter at 4 time points during the PICU stay: in the acute phase, pre- and post-extubation, and at discharge. EAdi was analyzed semi-automatically. Tonic EAdi was defined as the EAdi during the last quartile of expiration. Results: 55 patients with a median age of 10 months (interquartile range: 1-48) were studied. In infants (<1 year, n=28), tonic EAdi as a percentage of inspiratory activity was 48% (30-56) in the acute phase, 38% (25-44) pre-extubation, 28% (17-42) post-extubation and 33% (22-43) at PICU discharge (p<0.05, ANOVA, with a significant difference between recordings 1 and 3-4). No significant change was observed between the pre- and post-extubation recordings. In older patients (>1 year, n=27), tonic EAdi was negligible during normal breathing (0.6 µV). However, significant tonic EAdi (>1 µV and >10%) was observed at least once during the stay in 10 (37%) of these patients. Bronchiolitis was the only independent factor associated with tonic diaphragmatic activity. Conclusion: In infants, tonic EAdi persists after extubation, and it can be reactivated in certain pathological situations in older children. It appears to be an indicator of the patient's effort to maintain EELV. Further studies should be conducted to determine whether monitoring tonic EAdi could help detect inappropriate ventilation.
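The definition above (tonic EAdi = EAdi over the last quartile of expiration, expressed relative to inspiratory activity) maps directly onto a small computation. Below is a minimal sketch assuming a sampled EAdi waveform with known breath segmentation; the synthetic signal, sampling rate and segmentation are illustrative, not data from the study.

```python
import numpy as np

def tonic_eadi_percent(eadi, insp_end, exp_end):
    """Tonic EAdi for one breath, per the definition in the abstract:
    mean EAdi over the last quartile of expiration, expressed as a
    percentage of the phasic (inspiratory) peak.

    eadi     : 1-D array, one breath of the EAdi signal (uV)
    insp_end : sample index where inspiration ends / expiration begins
    exp_end  : sample index where expiration ends
    """
    expiration = eadi[insp_end:exp_end]
    last_quartile = expiration[int(0.75 * len(expiration)):]
    phasic_peak = eadi[:insp_end].max()
    return 100.0 * last_quartile.mean() / phasic_peak

# Synthetic breath: triangular inspiratory burst, expiration decaying
# toward a non-zero (tonic) baseline.
fs = 100                                      # Hz, assumed sampling rate
insp = np.linspace(0.0, 12.0, fs)             # rises to a 12 uV peak
exp_ = 12.0 * np.exp(-np.linspace(0, 3, 2 * fs)) + 4.0
breath = np.concatenate([insp, exp_])
print(f"tonic EAdi = {tonic_eadi_percent(breath, fs, len(breath)):.0f}% of peak")
```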

Relevance: 40.00%

Abstract:

The automatic acquisition of lexical associations from corpora is a crucial issue for Natural Language Processing. A lexical association is a recurrent combination of words that co-occur more often than expected by chance in a given domain. In fact, lexical associations define linguistic phenomena such as idioms, collocations or compound words. Because the sense of a lexical association is not compositional, their identification is fundamental for analysis and synthesis that take into account all the subtleties of the language. In this report, we introduce a new statistically based architecture that extracts contiguous and non-contiguous lexical associations from naturally occurring texts. For that purpose, three new concepts have been defined: the positional N-gram models, the Mutual Expectation and the GenLocalMaxs algorithm. The initial text is first transformed into a set of positional N-grams, i.e. ordered vectors of simple lexical units. Then, an association measure, the Mutual Expectation, evaluates the degree of cohesion of each positional N-gram. Finally, the GenLocalMaxs algorithm retrieves the candidate multiword units based on the identification of local maxima of the Mutual Expectation values. Great efforts have also been made to evaluate our methodology. For that purpose, we have proposed the normalisation of five well-known association measures and shown that both the Mutual Expectation and the GenLocalMaxs algorithm yield significant improvements over existing methodologies.
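For concreteness, here is a hedged sketch of the two central concepts as they are commonly defined in the lexical-association literature, restricted to the contiguous case; probabilities are estimated naively from raw counts over a toy corpus, and the original thesis's positional (non-contiguous) models are not reproduced.

```python
from collections import Counter

def ngram_counts(tokens, max_n=4):
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def mutual_expectation(g, counts, total):
    # ME = p(ngram)^2 / FPE, where FPE is the mean probability of the
    # (n-1)-grams obtained by deleting one word.
    p = counts[g] / total
    subs = [g[:i] + g[i + 1:] for i in range(len(g))]
    fpe = sum(counts[s] / total for s in subs) / len(g)
    return p * p / fpe if fpe > 0 else 0.0

def gen_local_maxs(counts, total, max_n=3):
    # GenLocalMaxs, contiguous case: keep an n-gram whose ME is >= that of
    # its (n-1)-gram sub-ngrams and > that of every (n+1)-gram containing it.
    kept = []
    for g in counts:
        n = len(g)
        if n < 2 or n > max_n:
            continue
        score = mutual_expectation(g, counts, total)
        supers = [h for h in counts if len(h) == n + 1
                  and (h[:n] == g or h[1:] == g)]
        higher_than_supers = all(score > mutual_expectation(h, counts, total)
                                 for h in supers)
        at_least_subs = (n == 2 or
                         all(score >= mutual_expectation(s, counts, total)
                             for s in (g[:n - 1], g[1:])))
        if higher_than_supers and at_least_subs:
            kept.append((g, score))
    return sorted(kept, key=lambda x: -x[1])

tokens = ("natural language processing deals with natural language "
          "and natural language processing tools").split()
counts = ngram_counts(tokens)
for g, s in gen_local_maxs(counts, len(tokens))[:5]:
    print(" ".join(g), round(s, 4))
```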

Relevance: 40.00%

Abstract:

This work concerns the construction of a gold-standard corpus for the automated evaluation of term extractors. These programs, designed to automatically extract the terms contained in a corpus, are used in various applications such as terminography, translation, information retrieval, indexing, etc. Their evaluation must therefore be carried out with respect to a specific application. One way to evaluate extractors is to annotate every occurrence of the terms in a corpus, which requires a protocol for identifying and delimiting terminological units. To our knowledge, no well-documented annotated corpus exists for the evaluation of term extractors. This work aims to build such a corpus and to describe the problems that must be addressed to do so. The gold-standard corpus we propose is a fully annotated corpus, built for a specific application, namely the compilation of a specialized dictionary of automotive mechanics. The corpus reflects the variety of ways terms are realized in context. Terms were selected according to precise criteria tied to the application, as well as to certain formal, linguistic and conceptual properties of terms and term variants. To evaluate an extractor with this corpus, one simply extracts all the terminological units of the corpus and compares this list, by means of metrics, with the extractor's output. A custom reference list can also be created by extracting subsets of terms according to different criteria. This work enables an automatic evaluation of term extractors that takes the role of the application into account. Since this evaluation is reproducible, it can be used not only to measure the quality of a given extractor, but also to compare different extractors and to improve extraction techniques.
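Once the reference list has been derived from the annotated corpus, evaluating an extractor reduces to set comparison with the usual metrics. A minimal sketch, using hypothetical term lists and simple exact-match comparison (the corpus described above also encodes term variants, which this ignores):

```python
def evaluate_extractor(extracted, gold):
    """Score an extractor's term list against a gold-standard list.
    Both arguments are iterables of normalized term strings."""
    extracted, gold = set(extracted), set(gold)
    tp = extracted & gold
    precision = len(tp) / len(extracted) if extracted else 0.0
    recall = len(tp) / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical reference list and extractor output.
gold = {"brake pad", "camshaft", "fuel injector", "timing belt"}
output = {"brake pad", "timing belt", "engine", "fuel injector"}
p, r, f = evaluate_extractor(output, gold)
print(f"P = {p:.2f}  R = {r:.2f}  F1 = {f:.2f}")
```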

Relevance: 40.00%

Abstract:

Situated at the intersection of computer-assisted reading and text analysis (Lecture et Analyse de Textes Assistées par Ordinateur, LATAO), electronic document management (Gestion Électronique des Documents, GÉD), information visualization and, in part, anthropology, this exploratory research experiments with a descriptive text-mining methodology for the thematic mapping of a corpus of anthropological texts. More precisely, we test hierarchical agglomerative clustering (HAC) to extract and analyze the themes found in the abstracts of master's theses and doctoral dissertations granted from 1985 to 2009 (1,240 abstracts) by the anthropology departments of the Université de Montréal and Université Laval, as well as the history department of Université Laval (for the archaeology and ethnology abstracts). The first part of the thesis presents our theoretical framework: we explain what text mining is, its origins, its applications and its methodological steps, and close with a review of the main publications. The second part is devoted to the methodological framework and covers the successive steps of the project: data collection, linguistic filtering and automatic clustering, to name only a few. Finally, the last part presents the results of our research, focusing in particular on two experiments. We also discuss thematic navigation and conceptual approaches to theme identification, for example the culture/biology dichotomy in anthropology. We conclude with the limits of the project and avenues for future research.
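As an illustration of the clustering step, here is a minimal sketch using TF-IDF weighting and Ward-linkage agglomerative clustering over a few invented abstracts; the study's actual preprocessing (linguistic filtering) and parameters are richer than this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

abstracts = [
    "ritual kinship exchange in highland communities",
    "kinship and ritual obligation among pastoralists",
    "skeletal morphology and bone density in early hominins",
    "dental microwear and diet reconstruction in hominins",
]

# Linguistic filtering reduced here to TF-IDF weighting with stop-word removal.
X = TfidfVectorizer(stop_words="english").fit_transform(abstracts).toarray()

# Bottom-up (agglomerative) clustering with Ward linkage, cut into 2 clusters.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
for doc, lab in zip(abstracts, labels):
    print(lab, doc[:45])
```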

Relevance: 40.00%

Abstract:

Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system (Dynamic REtrieval Analysis and semantic metadata Management, DREAM) designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed Partners' creative processes.
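The abstract describes the architecture only at a high level. The toy sketch below illustrates the general idea behind ontology-driven retrieval, expanding a query along concept relations so that clips annotated with related concepts are also found; every concept, relation and clip ID here is hypothetical, not part of the DREAM system's actual API.

```python
from collections import defaultdict

# Toy ontology: concept -> related concepts (hypothetical relations).
ontology = {
    "explosion": {"fire", "smoke", "debris"},
    "fire": {"flame", "smoke"},
}

# Toy index: concept -> IDs of clips annotated with it (hypothetical).
index = defaultdict(set)
index["fire"].add("clip_017")
index["smoke"].update({"clip_017", "clip_042"})
index["debris"].add("clip_099")

def retrieve(query, depth=1):
    """Return clips annotated with the query concept or its ontology
    neighbours, expanded up to `depth` relation hops."""
    concepts, frontier = {query}, {query}
    for _ in range(depth):
        frontier = set().union(*(ontology.get(c, set()) for c in frontier))
        concepts |= frontier
    return set().union(*(index.get(c, set()) for c in concepts))

print(sorted(retrieve("explosion")))  # clips reachable via related concepts
```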

Relevance: 40.00%

Abstract:

Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques are too complex to be computed online in the contexts where they are applied, so computing these features in real time for many points in the scene is impossible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGB-D-based computer vision systems where such features are typically used. Using the GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained by using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views. This allows descriptor computation at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. The source code will be made publicly available as a contribution to the open-source Point Cloud Library.
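To make the per-point workload concrete, here is a plain CPU reference for the simplest feature mentioned above, a PCA surface normal over a point's neighbourhood; the GPU implementation parallelizes this kind of per-point computation over the whole cloud. Data and sizes are illustrative.

```python
import numpy as np

def estimate_normal(points):
    """Normal of a local surface patch: the eigenvector of the
    neighbourhood covariance with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]

# Noisy samples of the plane z = 0: the recovered normal should be ~(0, 0, 1).
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(-1, 1, 50),
                         rng.uniform(-1, 1, 50),
                         rng.normal(0, 0.01, 50)])
print(estimate_normal(patch))
```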

Relevance: 30.00%

Abstract:

Bovine coronavirus (BCoV) is a member of group 2 of the Coronavirus genus (Nidovirales: Coronaviridae) and the causative agent of enteritis in both calves and adult cattle, as well as of respiratory disease in calves. The present study aimed to develop a semi-nested RT-PCR for the detection of BCoV based on representative up-to-date sequences of the nucleocapsid gene, a conserved region of the coronavirus genome. Three primers were designed, the first round with a predicted 463 bp fragment and the second (semi-nested) round with a predicted 306 bp fragment. The analytical sensitivity was determined by 10-fold serial dilutions of the BCoV Kakegawa strain (HA titre: 256) in DEPC-treated ultra-pure water, in fetal bovine serum (FBS) and in a BCoV-free fecal suspension; positive results were found up to the 10⁻², 10⁻³ and 10⁻⁷ dilutions, respectively, which suggests that the total amount of RNA in the sample influences the precipitation of pellets by the extraction method used. When fecal samples were used, the large quantity of total RNA served as a carrier of BCoV RNA, resulting in high analytical sensitivity and no detectable PCR-inhibiting substances. The final semi-nested RT-PCR protocol was applied to 25 fecal samples from adult cows, previously tested by a nested RT-PCR targeting RdRp used as a reference test, resulting in 20 and 17 positives for the first and second tests, respectively, with substantial agreement by kappa statistics (0.694). The high sensitivity and specificity of the proposed method, and the fact that the primers were designed from current BCoV sequences, provide the basis for a more accurate diagnosis of BCoV-caused diseases, as well as for further work on protocols for the detection of other Coronavirus representatives of both animal and public health importance.
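The agreement statistic cited above (Cohen's kappa) can be reproduced from a 2x2 table. The counts below are a reconstruction consistent with the abstract's totals (20 and 17 positives out of 25) and with kappa = 0.694; since the abstract does not report the full table, they should be read as illustrative.

```python
def cohens_kappa(a, b, c, d):
    """2x2 agreement table: a = positive by both tests, b = reference+/new-,
    c = reference-/new+, d = negative by both."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Reconstructed counts: 17 positive by both, 3 reference-only positives,
# 0 new-test-only positives, 5 negative by both (totals: 20 vs 17 of 25).
print(round(cohens_kappa(17, 3, 0, 5), 3))  # -> 0.694
```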

Relevance: 30.00%

Abstract:

Many tropical tree species produce growth rings in response to seasonal environmental factors that influence the activity of the vascular cambium. We applied the following methods to analyze the annual nature of tree-ring formation in 24 tree species from a seasonal semi-deciduous forest of southeast Brazil: describing wood anatomy and phenology, counting tree rings after cambium markings, and using permanent dendrometer bands. After 7 years of systematic observations and measurements, we found the following: the trees lost their leaves during the dry season and grew new leaves at the end of the same season; trunk increment dynamics corresponded to seasonal changes in precipitation, with higher increment (active period) during the rainy season (October-April) and lower increment (dormant period) during the dry season (May-September); and the number of tree rings formed after injuries to the cambium coincided with the number of years elapsed until the extraction of the wood samples. From these observations, we concluded that most study trees formed one growth ring per year. This suggests that tree species from the seasonal semi-deciduous forests of Brazil have an annual cycle of wood formation. These trees therefore have potential for use in future studies of tree age and radial growth rates, as well as for inferring ecological and regional climatic conditions. Such studies can provide important information for the management and conservation of these endangered forests.

Relevance: 30.00%

Abstract:

Systems approaches can help to evaluate and improve the agronomic and economic viability of nitrogen application in frequently water-limited environments. This requires a sound understanding of crop physiological processes and well-tested simulation models. This experiment on spring wheat therefore aimed to better quantify water × nitrogen effects on wheat by deriving key crop physiological parameters that have proven useful in simulating crop growth. For spring wheat grown in Northern Australia under four levels of nitrogen (0 to 360 kg N ha⁻¹) and either entirely on stored soil moisture or under full irrigation, kernel yields ranged from 343 to 719 g m⁻². Yield increases were strongly associated with increases in kernel number (9150-19950 kernels m⁻²), indicating the sensitivity of this parameter to water and N availability. Total water extraction under a rain shelter was 240 mm, with a maximum extraction depth of 1.5 m. A substantial amount of mineral nitrogen available deep in the profile (below 0.9 m) was taken up by the crop; this was the source of the nitrogen uptake observed after anthesis. Under dry conditions this late uptake accounted for approximately 50% of total nitrogen uptake and resulted in high (>2%) kernel nitrogen percentages even when no nitrogen was applied. Anthesis LAI values were reduced by 63% under sub-optimal water supply and by 50% under sub-optimal nitrogen supply. Radiation use efficiency (RUE), based on total incident short-wave radiation, was 1.34 g MJ⁻¹ and did not differ among treatments. The conservative nature of RUE resulted from the crop reducing leaf area rather than leaf nitrogen content (which would have affected photosynthetic activity) under these moderate levels of nitrogen limitation. The transpiration efficiency coefficient was also conservative, averaging 4.7 Pa in the dry treatments. Kernel nitrogen percentage varied from 2.08 to 2.42%. The study provides a data set and a basis for improving the simulation of water and nitrogen effects on spring wheat.
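The two reported efficiencies slot directly into the supply-limited growth logic common to crop simulation models. The sketch below illustrates that logic under stated assumptions (a Tanner-Sinclair-style dependence of transpiration efficiency on vapour pressure deficit); it is not the study's model, and the daily inputs are invented.

```python
# Reported parameters:
RUE = 1.34   # g biomass per MJ incident short-wave radiation
TEC = 4.7    # transpiration efficiency coefficient, Pa

def daily_growth(radiation_mj_m2, transpiration_mm, vpd_pa):
    """Biomass gain (g m^-2 day^-1) as the lesser of a radiation-limited
    and a water-limited supply. Assumes TE (g biomass per g water) =
    TEC / VPD, with 1 mm transpiration = 1000 g water per m^2."""
    radiation_limited = RUE * radiation_mj_m2
    water_limited = (TEC / vpd_pa) * 1000.0 * transpiration_mm
    return min(radiation_limited, water_limited)

# Invented day: 20 MJ m^-2 radiation, 4 mm transpiration, 1.5 kPa VPD.
print(round(daily_growth(20.0, 4.0, 1500.0), 1), "g m^-2")
```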

Relevance: 30.00%

Abstract:

The main objective of this research is to explore the possibility of using an ex situ solvent extraction technique for the remediation of soils contaminated with semi-volatile petroleum hydrocarbons. The composition of the organic phase was chosen so as to form a single-phase mixture with the aqueous phase without being disturbed (forming stable emulsions) by the soil particles carrying the contaminants, while also permitting regeneration of the organic solvent phase. We first studied the miscibility domain of the chosen ternary system, ethyl acetate-acetone-water. This system satisfied the above requirements, allowing the formation of a single liquid phase within a large range of compositions, as well as intimate contact with the soil. Contaminants in the diesel range from different functional groups were selected: xylene, naphthalene and hexadecane. Analytical control was performed by gas chromatography with an FID detector. The extraction kinetics proved to be fast, reaching equilibrium after 10 min. The effect of the solid-liquid ratio on the extraction efficiency was studied; lower S/L ratios (1:8, w/v) proved more efficient, reaching recoveries in the order of 95%. Extraction in multiple contacts did not improve recovery relative to a single contact. The solvent can be regenerated by distillation with a loss of around 10%; the contaminants are not evaporated and remain in the non-volatile phase. The overall results show that ex situ solvent extraction is technically a feasible option for the remediation of semi-volatile aromatic, polyaromatic and linear hydrocarbons.

Relevance: 30.00%

Abstract:

Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfilment of the requirements for the degree of Master in Computer Science

Relevance: 30.00%

Abstract:

Driven by the growth of the internet and the semantic web, together with improvements in communication speed and the rapid development of storage capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been growing interest in structures for formal representation with suitable characteristics, such as the ability to organize data and information and to reuse their contents for the generation of new knowledge. Controlled vocabularies, and specifically ontologies, stand out as one such structure of representation with high potential: they allow not only for data representation, but also for the reuse of such data for knowledge extraction, coupled with its subsequent storage through relatively simple formalisms. However, to ensure that ontology knowledge is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the update and maintenance of ontologies. The relevant literature already presents first results on the automatic maintenance of ontologies, but these are still at a very early stage; human-driven processes remain the usual way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge for ontology growth can be based on Data Mining techniques, an area that studies techniques for data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources, using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and applied in the building and construction sector, and its results are presented.
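A minimal sketch of the pattern-discovery idea (hypothetical sentences and thresholds, not the thesis's actual pipeline): support- and confidence-filtered co-occurrence mining proposes candidate concept pairs that a curator could confirm before a relation enters the ontology.

```python
from collections import Counter
from itertools import combinations

# Each "sentence" reduced to its set of domain concepts (invented examples).
sentences = [
    {"concrete", "slab", "reinforcement"},
    {"concrete", "reinforcement", "corrosion"},
    {"slab", "formwork"},
    {"concrete", "slab"},
]

pair_counts = Counter()
term_counts = Counter()
for s in sentences:
    term_counts.update(s)
    pair_counts.update(combinations(sorted(s), 2))

n = len(sentences)
for (a, b), c in pair_counts.items():
    support = c / n
    confidence = c / term_counts[a]
    if support >= 0.5 and confidence >= 0.6:   # illustrative thresholds
        print(f"candidate relation: {a} ~ {b} "
              f"(support={support:.2f}, confidence={confidence:.2f})")
```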

Relevance: 30.00%

Abstract:

In the search to increase the share of liquid, clean, renewable and sustainable energy in the world energy matrix, the use of lignocellulosic materials (LCMs) for bioethanol production arises as a valuable alternative. The objective of this work was to analyze and compare the performance of Saccharomyces cerevisiae, Pichia stipitis and Zymomonas mobilis in the production of bioethanol from mature coconut fibre (CFM) using different strategies: simultaneous saccharification and fermentation (SSF) and semi-simultaneous saccharification and fermentation (SSSF). The CFM was pretreated by hydrothermal pretreatment catalyzed with sodium hydroxide (HPCSH). The pretreated CFM was characterized by X-ray diffractometry and SEM, and the lignin recovered in the liquid phase by FTIR and TGA. After the HPCSH pretreatment (2.5% (v/v) sodium hydroxide at 180 °C for 30 min), the cellulose content was 56.44%, while the hemicellulose and lignin contents were reduced by 69.04% and 89.13%, respectively. Following pretreatment, the cellulosic fraction obtained was submitted to SSF and SSSF. The highest ethanol yields were obtained in SSSF: 90.18% with Pichia stipitis, and 91.17% and 91.03% with Saccharomyces cerevisiae and Zymomonas mobilis, respectively. It may be concluded that selecting the most efficient microorganism for high bioethanol production yields from cellulose pretreated by HPCSH depends on the operational strategy used, and that this pretreatment is an interesting alternative for adding value to mature coconut fibre compounds (lignin, phenolics), in accordance with the biorefinery concept.

Relevance: 30.00%

Abstract:

Study carried out during a stay at the Xerox Research Centre Europe in Grenoble, France, between June and December 2006. The project translates English technical terms into Norwegian. It is asymmetric because we have no linguistic resources for Norwegian, only for English. Methods that enforce contiguity ("local reordering" and selective permutation) were developed and implemented to improve the performance of an earlier tool. Contiguity means that when a word is translated into multiple words, those words must be adjacent in the sentence. In addition, a lookup table of operations for the technical terms was built and integrated into a demonstration program.
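A small sketch of the contiguity constraint described above; the data structures and word positions are hypothetical, not those of the Xerox tool.

```python
def is_contiguous(positions):
    """True if the aligned target positions form one unbroken run."""
    positions = sorted(positions)
    return all(b - a == 1 for a, b in zip(positions, positions[1:]))

# English term -> positions of its multi-word Norwegian translation in the
# target sentence (invented examples).
alignments = {
    "database": [2, 3],   # contiguous: kept
    "firewall": [0, 4],   # gap in between: rejected
}
for term, pos in alignments.items():
    print(term, "->", "kept" if is_contiguous(pos) else "rejected")
```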

Relevance: 30.00%

Abstract:

The current research project is both a process and an impact evaluation of community policing in Switzerland's five major urban areas: Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments, as well as from police internal documents and additional public sources. The impact evaluation uses official crime records and census statistics as contextual variables, and Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s.

The process evaluation produced a "Calendar of Action" to create panel data measuring community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to unobserved covariates.

The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate, and they covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
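Of the methods chained together above, the propensity-score-matching step is the most self-contained. The sketch below illustrates it on invented data: logistic-regression propensity scores followed by greedy one-to-one nearest-neighbour matching of posttest to pretest respondents. It is an illustration of the technique, not the study's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([rng.integers(18, 80, n),   # age
                     rng.integers(0, 2, n),     # gender
                     rng.integers(1, 4, n)])    # education level
wave = rng.integers(0, 2, n)                    # 0 = pretest, 1 = posttest

# Propensity of belonging to the posttest wave, given the covariates.
ps = LogisticRegression().fit(X, wave).predict_proba(X)[:, 1]

# Greedy 1-to-1 nearest-neighbour matching on the propensity score,
# without replacement.
post = np.where(wave == 1)[0]
pre = list(np.where(wave == 0)[0])
pairs = []
for i in post:
    if not pre:
        break
    j = min(pre, key=lambda k: abs(ps[i] - ps[k]))
    pairs.append((i, j))
    pre.remove(j)

gaps = [abs(ps[i] - ps[j]) for i, j in pairs]
print(f"{len(pairs)} matched pairs, mean propensity gap = {np.mean(gaps):.4f}")
```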