978 results for Blog datasets
Abstract:
Lay summary: The island of Fuerteventura (Canary Islands) offers the rare opportunity to observe the roots of an oceanic volcano that was built 25 to 30 million years ago and has since been completely eroded. Numerous small plutons of varied shape and composition crop out there, each recording an episode of magmatic activity. One of these plutons, called PX1, displays an unusual structure formed by alternating vertical bands, metres to hundreds of metres thick, of dark rocks of pyroxenitic or gabbroic composition. The pyroxenites clearly result from the accumulation of pyroxene crystals rather than from the simple solidification of a magma. This raises the question of which process led to the vertical accumulation of pyroxene-rich layers: classical pyroxenitic layering is subhorizontal, because it results from the gravitational settling of crystals out of the magma from which they crystallise. This study aims to identify and understand the mechanisms that produced this vertical mineralogical layering and the large volume of these cumulate facies. We also investigated the pressure and temperature conditions prevailing during emplacement of the pluton, as well as its lifespan and cooling rate. Finally, a geochemical approach allowed us to constrain the nature of the mantle source of the magmas involved in this magmatic activity. PX1 is in fact a sheeted dyke complex formed at 1-2 kbar and 1050-1100 °C; its construction required at least 150 km³ of magma. The alternation of gabbroic and pyroxenitic horizons represents successive injections of magma as vertical dykes, emplaced in a regional extensional setting.
The study of mineral orientations in these facies reveals that the gabbroic horizons record the regional extension, whereas the pyroxenites were generated by compaction within the pluton. This suggests that the stress regime, extensional at the onset of PX1 emplacement, periodically became compressive within the pluton itself. This compression is attributed to emplacement cycles during which the growth rate of the pluton exceeded the rate of regional extension. The differentiation observed within each horizon, from olivine-rich pyroxenites to pyroxenites with interstitial plagioclase and gabbros, together with the geochemical composition of their constituent minerals, suggests that each vertical dyke was emplaced from a magma of identical composition and then evolved independently of the others according to the local thermal and stress regimes. When a crystallising magma was placed under compression, the residual liquid was separated from the crystals already formed and extracted from the system, leaving behind an accumulation of crystals whose nature and proportions depended on the stage of crystallisation reached by the magma at the time of extraction. Thus, the olivine pyroxenite layers (olivine being the first mineral to crystallise) formed when the corresponding magma was still only slightly crystallised; conversely, the plagioclase-rich pyroxenites (plagioclase crystallising later in the sequence) and some gabbros of cumulate character result from compression late in the crystallisation of the dyke concerned. The residual liquids extracted from the pyroxenitic layers are rarely observed in PX1; some pockets and veinlets of anorthositic composition may be their witnesses. Most of these liquids probably reached higher levels of the pluton, or even the surface of the volcano.
The origin of the periodic compressive regime affecting the crystallising dykes is attributed to the subsequent injections of magma within the pluton, which succeeded one another faster than the dykes could solidify. High-precision U/Pb dating of zircon and baddeleyite crystals, together with 40Ar/39Ar dating of amphibole crystals, reveals that the emplacement of PX1 began 22.10 ± 0.07 Ma ago and lasted some 0.48 ± 0.22 to 0.52 ± 0.29 Ma. This time span is compatible with that required for the crystallisation of individual dykes, which ranges from less than a year at the onset of magmatism to 5 years at the peak of PX1 activity. The presence of resorbed crystals recording a complex crystallisation history suggests the existence of a convecting, periodically replenished magma chamber beneath PX1. The isotopic compositions of the rocks studied reveal a deep, hotspot-type mantle source with a contribution from the metasomatised lithospheric mantle present beneath the Canary Islands. Résumé: The Miocene mafic intrusion PX1 forms part of the shallow basement (0.15-0.2 GPa, 1100 °C) of an ocean-island volcano. The distinctive feature of this pluton is the alternation of gabbro and pyroxenite units, which defines a vertical (NNE-SSW) magmatic layering. The gabbroic and pyroxenitic horizons consist of metre-scale differentiation units, suggesting emplacement by periodic injection of vertical magma dykes forming a sheeted dyke complex. Each vertical dyke underwent differentiation along a sub-vertical solidification front parallel to the dyke margins.
The pyroxenites result from the fractionation and accumulation of olivine ± clinopyroxene ± plagioclase from a mildly alkaline basaltic magma and are interpreted as truncated differentiation units from which the interstitial liquid was extracted by compaction. The preferred orientation of clinopyroxenes in these pyroxenites (obtained by EBSD and micro-tomographic analysis) reveals a pure-shear component in the genesis of these rocks, confirming this interpretation. Compaction of the pyroxenites was probably caused by the emplacement of subsequent magma dykes, and the expelled interstitial liquid was probably collected by these later injections. The clinopyroxenes of the gabbros show a simple-shear component, suggesting that the gabbros were affected by syn-magmatic deformation parallel to the NNE-SSW shear zones observed around PX1 and linked to the Miocene context of regional extension. This suggests that the gabbros correspond to low emplacement rates at the end of cycles of magmatic activity and were little or not at all affected by compaction. The initiation and geometry of PX1 are thus controlled by the regional extensional tectonic context, whereas magma injection rates and volumes depend on source-related factors. High injection rates probably result in pluton growth exceeding the space created by this extension. In that case, the propagation of new dykes and the inability of magma to circulate through older, crystallised dykes could increase the non-lithostatic pressure on the latter, expressed as pure-shear deformation and expulsion of the interstitial liquid they contain (documented by the anorthositic collection zones). The whole-rock major- and trace-element compositions of the PX1 gabbros and pyroxenites are globally homogeneous and depend on the cumulate nature of the samples.
However, small variations in trace-element concentrations, as well as the trace-element contents of clinopyroxene rims, suggest that the latter underwent in-situ re-equilibration and crystallisation. The homogeneity of sample compositions, together with the presence of resorbed clinopyroxene grains, suggests that the PX1 sheeted dyke complex was emplaced above a periodically replenished, efficiently convecting magma chamber. Each dyke therefore derives from the same magma but differentiated by in-situ crystallisation (up to 70% fractionation) independently of the others. In these crystallised dykes, the cumulate minerals undergo partial re-equilibration with the interstitial liquid before the latter is expelled during compaction (thus terminating differentiation). This emplacement model implies that a minimum of 150 km³ of magma was needed to generate PX1, part of this volume having been erupted through the 'Central Volcanic Complex' of Fuerteventura. The radiogenic isotope ratios measured reveal the contribution of three mantle end-members to the genesis of the PX1 magma. Mixing of these HIMU, DMM and EM1 end-members would reflect the interaction of the Canary hotspot with a heterogeneous, metasomatised lithospheric mantle. The small variations in these ratios and in trace-element contents within the facies could reflect variable degrees of partial melting of the source, resulting in variable sampling of the metasomatised lithospheric mantle during its interaction with the hotspot. High-precision U/Pb dating (TIMS) of zircon and baddeleyite crystals extracted from PX1 gabbros reveals that magma crystallisation began 22.10 ± 0.07 Ma ago and that magmatic activity lasted a minimum of 0.48 to 0.52 Ma.
40Ar/39Ar ages obtained on amphibole range from 21.9 ± 0.6 to 21.8 ± 0.3 Ma, identical within error to the U/Pb ages. Combining these dating methods suggests that the maximum time PX1 took to cool below the closure temperature of amphibole is 0.8 Ma, implying a lifespan for PX1 of 520,000 to 800,000 years. The coexistence of baddeleyite and zircon crystals in one gabbro is attributed to its interaction with a CO2-rich fluid released by the host carbonatites during the contact metamorphism generated by the emplacement of PX1, about 160,000 years after the onset of emplacement. The lifespans obtained agree with the emplacement model, which implies a crystallisation time of one to five years for each dyke. Abstract: The Miocene PX1 gabbro-pyroxenite intrusion (Fuerteventura, Canary Islands) is interpreted as the shallow-level feeder-zone (0.15-0.2 GPa and 1100-1120 °C) to an ocean island volcano. The particularity of PX1 is that it displays a NNE-SSW-trending vertical magmatic banding expressed by alternating gabbro and pyroxenite sequences. The gabbro and pyroxenite sequences consist of metre-thick differentiation units, which suggest emplacement by periodic injection of magma pulses as vertical dykes that amalgamated, similarly to a sub-volcanic sheeted dyke complex. Individual dykes underwent internal differentiation following a solidification front (favoured by a significant lateral/horizontal thermal gradient) parallel to the dyke edges. Pyroxenitic layers result from the fractionation and accumulation of clinopyroxene ± olivine ± plagioclase crystals from a mildly alkaline basaltic liquid and are interpreted as truncated differentiation sequences, from which residual melts were extracted by compaction.
Clinopyroxene mineral orientations in the pyroxenites (evidenced by EBSD and micro X-ray tomography analysis) display a marked pure-shear component, supporting this interpretation. Compaction and squeezing of the crystal mush is ascribed to the incoming and inflating magma pulses. The resulting expelled interstitial liquid was likely collected and erupted along with the magma flowing through the newly injected dykes. Gabbro sequences represent crystallised coalesced magma batches, emplaced at lower rates at the end of eruptive cycles, that underwent only minor melt extraction, as evidenced by clinopyroxene orientations recording a simple-shear component that suggests syn-magmatic deformation parallel to the observed NNE-SSW-trending shear zones induced by the regional tensional Miocene stress field. The initiation and geometry of PX1 are controlled by the regional extensional tectonic regime, whereas rates and volumes of magma depend on source-related factors. High injection rates are likely to induce intrusion growth rates larger than can be accommodated by the regional extension. In this case, dyke tip geometry and the inability of magma to circulate through previously emplaced and crystallised dykes could result in an increase of non-lithostatic pressure on previously emplaced mushy dyke walls, generating strong pure-shear compaction and interstitial melt expulsion within the feeder-zone, as recorded by the cumulate pyroxenite bands and anorthositic collection zones. The whole-rock major- and trace-element chemistry of the PX1 gabbros and pyroxenites is globally homogeneous and controlled by the cumulate nature of the samples (i.e. by the modal proportions of olivine, pyroxene, plagioclase and oxides). However, small variations in whole-rock trace-element contents, as well as in the trace-element contents of clinopyroxene rims, suggest that in-situ re-equilibration and crystallisation have occurred.
Additionally, the overall homogeneity and the complex zoning of rare resorbed clinopyroxene crystals suggest that the PX1 feeder-zone overlies a periodically replenished and efficiently mixed magma chamber. Each individual magma dyke thus originated from a compositionally constant, mildly alkaline magma and differentiated independently of the others, reaching up to 70% fractionation. Following dyke arrest, the cumulate minerals interacted with the trapped interstitial liquid prior to its compaction-linked expulsion (thus stopping the differentiation process). This emplacement model implies that a minimum of approximately 150 km³ of magma is needed to generate PX1, part of it having been erupted through the overlying Central Volcanic Complex of Fuerteventura. The radiogenic isotope ratios of PX1 samples reveal the contribution of three end-members during magma genesis. This mixing of the HIMU, EM1 and DMM end-members could reflect the interaction of the deep-seated Canarian mantle plume with a heterogeneous, metasomatised and serpentinised lithospheric mantle. Additionally, the observed trace-element and isotopic variations within the same facies groups could reflect varying degrees of partial melting of the source region, tapping more or less large areas of the metasomatised lithospheric mantle during interaction with the plume. High-precision ID-TIMS U/Pb zircon and baddeleyite ages from PX1 gabbro samples indicate initiation of magma crystallisation at 22.10 ± 0.07 Ma. The magmatic activity lasted a minimum of 0.48 to 0.52 Ma. 40Ar/39Ar amphibole ages of 21.9 ± 0.6 to 21.8 ± 0.3 Ma are identical within error to the U/Pb ages. Combining the 40Ar/39Ar and U/Pb datasets implies that the maximum amount of time PX1 took to cool below the amphibole closure temperature is 0.8 Ma, suggesting a PX1 lifetime of 520,000 to 800,000 years.
Moreover, the coexistence of baddeleyite and zircon in a single sample is ascribed to the interaction of PX1 with CO2-rich, carbonatite-derived fluids released from the host-rock carbonatites during contact metamorphism 160,000 years after PX1 initiation. These ages are in agreement with the emplacement model, which implies a crystallisation time of less than one year to five years for individual dykes.
Abstract:
Forecasting coal resources and reserves is critical for coal mine development. Thickness maps are commonly used for assessing coal resources and reserves; however, they are of limited use for capturing coal-splitting effects in thick and heterogeneous coal zones. As an alternative, three-dimensional geostatistical methods are used to populate the facies distribution within a densely drilled heterogeneous coal zone in the As Pontes Basin (NW Spain). Coal distribution in this zone is mainly characterized by coal-dominated areas in the central parts of the basin interfingering with terrigenous-dominated alluvial fan zones at the margins. The three-dimensional models obtained are applied to forecast coal resources and reserves. Predictions using subsets of the entire dataset are also generated to understand the performance of the methods under limited data constraints. Three-dimensional facies interpolation methods tend to overestimate coal resources and reserves due to interpolation smoothing. Facies simulation methods yield resource predictions similar to those of conventional thickness-map approximations. Reserves predicted by facies simulation methods are mainly influenced by: a) the specific coal-proportion threshold used to determine whether a block can be recovered, and b) the capability of the modelling strategy to reproduce areal trends in coal proportions and splitting between coal-dominated and terrigenous-dominated areas of the basin. Reserve predictions differ between the simulation methods, even with dense conditioning datasets. The simulation methods can be ranked according to the correlation of their outputs with predictions from directly interpolated coal-proportion maps: a) with low-density datasets, sequential indicator simulation with trends yields the best correlation; b) with high-density datasets, sequential indicator simulation with post-processing yields the best correlation, because the areal trends are provided implicitly by the dense conditioning data.
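The reserve criterion this abstract describes — a model block counts toward reserves only when its simulated coal proportion clears a threshold — can be sketched in a few lines. All numbers below (grid size, block dimensions, density, threshold) are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical 3-D block model: one simulated coal proportion per block.
rng = np.random.default_rng(0)
coal_proportion = rng.random((10, 10, 5))

THRESHOLD = 0.5                    # assumed recoverability cut-off
BLOCK_VOLUME_M3 = 50 * 50 * 2      # assumed 50 m x 50 m x 2 m blocks
COAL_DENSITY_T_PER_M3 = 1.3        # assumed lignite density

# A block is recoverable when its coal proportion clears the threshold;
# tonnage sums the coal fraction of each recoverable block.
recoverable = coal_proportion >= THRESHOLD
reserve_tonnes = (coal_proportion[recoverable].sum()
                  * BLOCK_VOLUME_M3 * COAL_DENSITY_T_PER_M3)
print(int(recoverable.sum()), round(reserve_tonnes, 1))
```

In a real workflow `coal_proportion` would come from the facies simulation, and reserves would be summarised over many realisations rather than a single one.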
Abstract:
This paper presents a novel image classification scheme for benthic coral reef images that can be applied to both single-image and composite mosaic datasets. The proposed method can be configured to the characteristics (e.g., the size of the dataset, number of classes, resolution of the samples, color information availability, class types, etc.) of individual datasets. The proposed method uses completed local binary pattern (CLBP), grey level co-occurrence matrix (GLCM), Gabor filter response, and opponent angle and hue channel color histograms as feature descriptors. For classification, either k-nearest neighbor (KNN), neural network (NN), support vector machine (SVM) or probability density weighted mean distance (PDWMD) is used. The combination of features and classifiers that attains the best results is presented, together with guidelines for selection. The accuracy and efficiency of the proposed method are compared with other state-of-the-art techniques using three benthic and three texture datasets. The proposed method achieves the highest overall classification accuracy of the tested methods and has moderate execution time. Finally, the proposed classification scheme is applied to a large-scale image mosaic of the Red Sea to create a completely classified thematic map of the reef benthos.
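As a toy illustration of the texture-feature-plus-classifier idea, the sketch below pairs a hand-rolled GLCM contrast feature with a 1-nearest-neighbour rule. This is not the paper's pipeline, and the synthetic "smooth" and "striped" patches are invented stand-ins for sand- and coral-like textures:

```python
import numpy as np

# Hand-rolled GLCM "contrast" for horizontally adjacent pixels
# (a stand-in for the paper's CLBP/GLCM/Gabor feature bank).
def glcm_contrast(img, levels=8):
    q = (img * (levels - 1)).astype(int)         # quantise grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                          # co-occurrence counts
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()           # texture contrast

# Invented training patches: low-contrast "sand" vs striped "coral".
rng = np.random.default_rng(1)
smooth = [rng.random((16, 16)) * 0.1 for _ in range(5)]
rough = [np.tile([0.0, 1.0], (16, 8)) + rng.random((16, 16)) * 0.1
         for _ in range(5)]

X = np.array([glcm_contrast(im) for im in smooth + rough])
y = np.array([0] * 5 + [1] * 5)                  # 0 = sand, 1 = coral

def predict_1nn(feature):
    """Label of the training patch with the closest contrast value."""
    return y[np.argmin(np.abs(X - feature))]

print(predict_1nn(glcm_contrast(np.tile([0.0, 1.0], (16, 8)))))  # striped query
```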
Abstract:
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways.
Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
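A minimal sketch of a non-parametric, outlier-resistant line fit in the spirit of the technique described above is the Theil-Sen estimator (median of pairwise slopes). The data below are hypothetical log-log scaling values, not the author's dataset or exact method:

```python
import numpy as np

# Theil-Sen: slope = median of all pairwise slopes; highly outlier-resistant.
def theil_sen(x, y):
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept

# Hypothetical log10 body mass vs log10 gestation period, one gross outlier.
log_mass = np.linspace(0, 6, 20)
log_gest = 0.25 * log_mass + 1.5      # assumed scaling exponent 0.25
log_gest[3] += 3.0                    # a single contaminated point

slope, intercept = theil_sen(log_mass, log_gest)
print(round(slope, 3), round(intercept, 3))
```

A single gross outlier barely moves this fit, whereas an ordinary least-squares line would be pulled noticeably towards it.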
Abstract:
MOTIVATION: Comparative analyses of gene expression data from different species have become an important component of the study of molecular evolution. Thus methods are needed to estimate evolutionary distances between expression profiles, as well as a neutral reference to estimate selective pressure. Divergence between expression profiles of homologous genes is often calculated with Pearson's or Euclidean distance. Neutral divergence is usually inferred from randomized data. Despite being widely used, neither of these two steps has been well studied. Here, we analyze these methods formally and on real data, highlight their limitations and propose improvements. RESULTS: It has been demonstrated that Pearson's distance, in contrast to Euclidean distance, leads to underestimation of the expression similarity between homologous genes with a conserved uniform pattern of expression. Here, we first extend this study to genes with conserved but specific patterns of expression. Surprisingly, we find that both Pearson's and Euclidean distances used as a measure of expression similarity between genes depend on the expression specificity of those genes. We also show that the Euclidean distance depends strongly on data normalization. Next, we show that the randomization procedure that is widely used to estimate the rate of neutral evolution is biased when broadly expressed genes are abundant in the data. To overcome this problem, we propose a novel randomization procedure that is unbiased with respect to the expression profiles present in the datasets. Applying our method to mouse and human gene expression data suggests significant gene expression conservation between these species. CONTACT: marc.robinson-rechavi@unil.ch; sven.bergmann@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
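The normalisation sensitivity the abstract contrasts can be shown directly: Pearson's distance (1 − r) ignores an overall scaling of a profile, while Euclidean distance does not. The profiles below are hypothetical:

```python
import numpy as np

# Pearson distance (1 - r) is invariant to scaling a profile; Euclidean is not.
def pearson_distance(a, b):
    return 1 - np.corrcoef(a, b)[0, 1]

def euclidean_distance(a, b):
    return np.linalg.norm(a - b)

profile = np.array([1.0, 5.0, 2.0, 8.0, 3.0])  # invented expression in 5 tissues
scaled = 2.0 * profile                         # same pattern, doubled overall level

print(pearson_distance(profile, scaled))    # ~0: the pattern is identical
print(euclidean_distance(profile, scaled))  # > 0: grows with expression level
```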
Abstract:
The final objective of this work is to study the methodologies, techniques and tools that facilitate the development of interactive web environments within what is known as the Web 2.0 paradigm. The state of the art in the accessibility of digital environments is first analysed, and the specific accessibility characteristics and problems of content management systems (CMS environments) are presented. A methodology for accessibility engineering in Web 2.0 environments is then described. A framework is also presented, intended to support the proposed methodology with the highest possible degree of automation while reducing the effort required from the people responsible for managing accessibility evaluation. Finally, a series of prototypes is presented that examine the output the framework should provide for several case studies of Web 2.0 environments managed through content management systems, namely a generic environment, a wiki and a blog.
Abstract:
Background: The G1-to-S transition of the cell cycle in the yeast Saccharomyces cerevisiae involves an extensive transcriptional program driven by the transcription factors SBF (Swi4-Swi6) and MBF (Mbp1-Swi6). Activation of these factors ultimately depends on the G1 cyclin Cln3. Results: To determine the transcriptional targets of Cln3 and their dependence on SBF or MBF, we first used DNA microarrays to interrogate gene expression upon Cln3 overexpression in synchronized cultures of strains lacking components of SBF and/or MBF. Second, we integrated this expression dataset together with other heterogeneous data sources into a single probabilistic model based on Bayesian statistics. Our analysis produced more than 200 transcription factor-target assignments, validated by ChIP assays and by functional enrichment. Our predictions show higher internal coherence and predictive power than previous classifications. Our results support a model whereby SBF and MBF may be differentially activated by Cln3. Conclusions: Integration of heterogeneous genome-wide datasets is key to building accurate transcriptional networks. By such integration, we provide here a reliable transcriptional network at the G1-to-S transition in the budding yeast cell cycle. Our results suggest that to improve the reliability of predictions we need to feed our models with more informative experimental data.
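The Bayesian integration step can be caricatured with a naive odds update: each independent evidence source contributes a likelihood ratio that multiplies the prior odds for a factor-target assignment. All numbers below are invented for illustration and are unrelated to the study's actual model:

```python
# Naive odds update: each independent evidence source multiplies the prior odds.
# All numbers are invented for illustration.
prior = 0.05            # assumed prior probability that a gene is an SBF target
lr_expression = 8.0     # assumed likelihood ratio from the expression dataset
lr_binding = 12.0       # assumed likelihood ratio from ChIP binding evidence

posterior_odds = prior / (1 - prior) * lr_expression * lr_binding
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 3))
```

Two individually weak sources combined this way can push a low prior to a confident call, which is the intuition behind integrating heterogeneous datasets.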
Abstract:
The manufacture, distribution and use of false identity documents constitute a threat to both public and private security. These fraudulent documents act as a catalyst for a multitude of forms of crime, from the most trivial to the most serious and organised. The scale, complexity and low visibility of identity document fraud, together with its repetitive and evolving character, call for new responses that go beyond a traditional case-by-case approach or an all-technology strategy whose failure is revealed by historical perspective. These new responses require strengthening the capacity to understand the crime problems posed by identity document fraud and the phenomena that drive it. Such an understanding is quite simply necessary in order to imagine, evaluate and decide on the most appropriate solutions and measures. It requires developing analysis capacities and the criminal intelligence function that underpin the most recent policing models, such as intelligence-led policing or problem-oriented policing. In this context, the doctoral work adopts an original position by postulating that false identity documents are usefully conceived as the material trace or remnant resulting from the document manufacturing or alteration activity carried out by forgers. On the basis of this fundamental postulate, it is argued that the scientific, methodical and systematic exploitation of these traces through a forensic intelligence process makes it possible to generate phenomenological knowledge about the forms of crime that manufacture, distribute or use false identity documents, knowledge that integrates with and advantageously serves criminal intelligence.
In support of this thesis and of the more general study of forensic intelligence, the doctoral work proposes definitions and models. It describes new profiling methods and initiates the construction of a catalogue of forms of analysis. It also draws on experiments and case studies. The results obtained demonstrate that the systematic processing of forensic data makes a useful and relevant contribution to strategic, operational and tactical criminal intelligence, as well as to criminology. Combined with otherwise available information, the forensic intelligence produced can support policing in its repressive, proactive, preventive and control dimensions. In particular, the proposed methods for profiling false identity documents make it possible to reveal trends across extended datasets, to analyse modus operandi, or to infer a common or different source. These methods support means of detecting and monitoring crime series, problems and phenomena that fit within operational monitoring. They make it possible to group isolated cases by problem, to highlight the organised forms of crime that deserve the most attention, and to produce robust and novel knowledge offering a deeper perception of crime. The work also discusses the difficulties associated with managing data and information at different levels of generality, and the difficulties of implementing the forensic intelligence process in practice. This doctoral work focuses primarily on false identity documents and their treatment by the stakeholders of policing.
Through an inductive approach, the work also proceeds to a generalisation, underlining that the above observations hold not only for the systematic processing of false identity documents but for that of any type of trace from which a profile is extracted. From this research emerges a more transversal definition and understanding of the notion and function of forensic intelligence. The production, distribution and use of false identity documents constitute a threat to both public and private security. Fraudulent documents are a catalyser for a multitude of crimes, from the most trivial to the most serious and organised forms. The dimension, complexity and low visibility, as well as the repetitive and evolving character, of the production and use of false identity documents call for new solutions that go beyond the traditional case-by-case approach, or the technology-focused strategy whose failure is revealed by the historic perspective. These new solutions require strengthening the ability to understand crime phenomena and crime problems posed by false identity documents. Such an understanding is pivotal in order to be able to imagine, evaluate and decide on the most appropriate measures and responses. Therefore, analysis capacities and crime intelligence functions, which underpin the most recent policing models such as intelligence-led policing or problem-oriented policing, have to be developed. In this context, the doctoral research work adopts an original position by postulating that false identity documents can be usefully perceived as the material remnant resulting from the criminal activity undertaken by forgers, namely the manufacture or modification of identity documents.
Based on this fundamental postulate, it is proposed that the scientific, methodical and systematic processing of these traces through a forensic intelligence approach can generate phenomenological knowledge on the forms of crime that produce, distribute and use false identity documents. Such knowledge integrates with and advantageously serves crime intelligence efforts. In support of this original thesis and of a more general study of forensic intelligence, the doctoral work proposes definitions and models. It describes new profiling methods and initiates the construction of a catalogue of analysis forms. It also leverages experimentation and case studies. The results demonstrate that the systematic processing of forensic data contributes usefully and relevantly to strategic, tactical and operational crime intelligence, and also to criminology. Combined with other available information, forensic intelligence may support policing in its repressive, proactive, preventive and control activities. In particular, the proposed profiling methods make it possible to reveal trends in extended datasets, to analyse modus operandi, or to infer that false identity documents have a common or different source. These methods support the detection and follow-up of crime series, crime problems and phenomena, and therefore contribute to crime-monitoring efforts. They make it possible to link and group by problem cases that were previously viewed as isolated, to highlight organised forms of crime that deserve the greatest attention, and to elicit robust and novel knowledge offering a deeper perception of crime. The doctoral research work also discusses difficulties associated with the management of data and information relating to different levels of generality, and difficulties associated with implementing the forensic intelligence process in practice. The doctoral work focuses primarily on false identity documents and their treatment by policing stakeholders.
However, through an inductive process, it makes a generalisation underlining that the above observations apply not only to the systematic processing of false identity documents, but to any kind of trace from which a profile is extracted. A more transversal definition and understanding of the concept and function of forensic intelligence therefore derive from the doctoral work.
Resumo:
BACKGROUND: The aim of the current study was to assess whether widely used nutritional parameters correlate with the nutritional risk score (NRS-2002) in identifying postoperative morbidity, and to evaluate the role of nutritionists in nutritional assessment. METHODS: A randomized trial on preoperative nutritional interventions (NCT00512213) provided the study cohort of 152 patients at nutritional risk (NRS-2002 ≥3) with comprehensive phenotyping, including diverse nutritional parameters (n=17) elaborated by nutritional specialists, and potential demographic and surgical confounders (n=5). Risk factors for overall, severe (Dindo-Clavien 3-5) and infectious complications were identified by univariate analysis; parameters with P<0.20 were then entered into a multiple logistic regression model. RESULTS: The final analysis included 140 patients with complete datasets. Of these, 61 patients (43.6%) were overweight, and 72 patients (51.4%) experienced at least one complication of any degree of severity. Univariate analysis identified an association between few (≤3) active co-morbidities (OR=4.94; 95% CI: 1.47-16.56, p=0.01) and overall complications. Patients screened as malnourished by nutritional specialists presented fewer overall complications than those not malnourished (OR=0.47; 95% CI: 0.22-0.97, p=0.043). Severe postoperative complications occurred more often in patients with low lean body mass (OR=1.06; 95% CI: 1-1.12, p=0.028). Few (≤3) active co-morbidities (OR=8.8; 95% CI: 1.12-68.99, p=0.008) were associated with postoperative infections. Patients screened as malnourished by nutritional specialists presented fewer infectious complications (OR=0.28; 95% CI: 0.1-0.78, p=0.014) than those not malnourished.
Multivariate analysis identified few co-morbidities (OR=6.33; 95% CI: 1.75-22.84, p=0.005), low weight loss (OR=1.08; 95% CI: 1.02-1.14, p=0.006) and low hemoglobin concentration (OR=2.84; 95% CI: 1.22-6.59, p=0.021) as independent risk factors for overall postoperative complications. Compliance with nutritional supplements (OR=0.37; 95% CI: 0.14-0.97, p=0.041) and supplementation of malnourished patients as assessed by nutritional specialists (OR=0.24; 95% CI: 0.08-0.69, p=0.009) were independently associated with decreased infectious complications. CONCLUSIONS: Nutritional support based upon NRS-2002 screening might result in overnutrition, with potentially deleterious clinical consequences. We emphasize the importance of detailed assessment of the nutritional status by a dedicated specialist before deciding on early nutritional intervention for patients with an initial NRS-2002 score of ≥3.
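The univariate screening above reports risk factors as odds ratios with 95% confidence intervals. As a minimal illustration of how such an estimate is computed from a 2×2 contingency table (the study itself used logistic regression models; this sketch uses the standard Woolf log-OR approximation, and the counts are made up, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
        a = exposed with complication,   b = exposed without,
        c = unexposed with complication, d = unexposed without.
    Interval uses the Woolf (log-OR) approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(10, 5, 4, 20)
```

A wide interval such as the one reported for infections (95% CI: 1.12-68.99) directly reflects the small cell counts entering this kind of computation.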
Resumo:
BACKGROUND: In acute respiratory failure, arterial blood gas analysis (ABG) is used to diagnose hypercapnia. Once non-invasive ventilation (NIV) is initiated, ABG should be repeated at least within 1 h to assess the PaCO2 response to treatment and help detect NIV failure. The main aim of this study was to assess whether measuring end-tidal CO2 (EtCO2) with a dedicated naso-buccal sensor during NIV could predict PaCO2 variation and/or absolute PaCO2 values. The additional aim was to assess whether active or passive prolonged expiratory maneuvers could improve the agreement between expiratory CO2 and PaCO2. METHODS: This is a prospective study in adult patients suffering from acute hypercapnic respiratory failure (PaCO2 ≥ 45 mmHg) treated with NIV. EtCO2 and expiratory CO2 values during active and passive expiratory maneuvers were measured using a dedicated naso-buccal sensor and compared to concomitant PaCO2 values. The agreement between two consecutive values of EtCO2 (delta EtCO2) and two consecutive values of PaCO2 (delta PaCO2), and between PaCO2 and concomitant expiratory CO2 values, was assessed using the Bland-Altman method adjusted for the effects of repeated measurements. RESULTS: Fifty-four datasets from a population of 11 patients (8 COPD and 3 non-COPD patients) were included in the analysis. PaCO2 values ranged from 39 to 80 mmHg, and EtCO2 from 12 to 68 mmHg. In the agreement between delta EtCO2 and delta PaCO2, the bias was -0.3 mmHg and the limits of agreement were -17.8 and 17.2 mmHg. In the agreement between PaCO2 and EtCO2, the bias was 14.7 mmHg and the limits of agreement were -6.6 and 36.1 mmHg. Adding active and passive expiration maneuvers did not improve PaCO2 prediction. CONCLUSIONS: During NIV delivered for acute hypercapnic respiratory failure, measuring EtCO2 with a dedicated naso-buccal sensor was inaccurate for predicting both PaCO2 and PaCO2 variations over time. Active and passive expiration maneuvers did not improve PaCO2 prediction.
TRIAL REGISTRATION: ClinicalTrials.gov: NCT01489150.
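The Bland-Altman analysis used above characterizes agreement between two paired measurement methods by the mean difference (bias) and the limits of agreement (bias ± 1.96 SD). A minimal sketch of the plain computation (the study additionally adjusted for repeated measurements per patient, which this simple version does not; the readings below are invented, not study data):

```python
import numpy as np

def bland_altman(x, y, z=1.96):
    """Bias and limits of agreement between two paired methods.
    Plain version: does NOT adjust for repeated measurements."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)                  # sample SD of the differences
    return bias, bias - z * sd, bias + z * sd

# Hypothetical paired PaCO2 / EtCO2-style readings (mmHg):
bias, lo, hi = bland_altman([50, 55, 60, 62], [48, 52, 59, 60])
```

Limits of agreement as wide as those reported above (roughly ±18 mmHg around the bias) are what led the authors to judge EtCO2 clinically inaccurate for predicting PaCO2.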
Resumo:
Alternative splicing produces multiple isoforms from the same gene, thus increasing the number of distinct transcripts of a species. It is a virtually ubiquitous mechanism in eukaryotes; for example, more than 90% of human protein-coding genes are alternatively spliced. Recent evolutionary studies showed that alternative splicing is a fast-evolving and highly species-specific mechanism, and its rapid evolution has been considered a contribution to the phenotypic diversity between species. However, the function of many isoforms produced by alternative splicing remains unclear, and they might be the result of noisy splicing. Thus, the functional relevance of alternative splicing and the evolutionary mechanisms behind its rapid divergence among species are still poorly understood. During my thesis, I performed a large-scale analysis of the regulatory mechanisms that drive the rapid evolution of alternative splicing. To study the evolution of alternative splicing regulatory mechanisms, I used an extensive RNA-sequencing dataset comprising 12 tetrapod species (human, chimpanzee, bonobo, gorilla, orangutan, macaque, marmoset, mouse, opossum, platypus, chicken and frog) and 8 tissues (cerebellum, brain, heart, kidney, liver, testis, placenta and ovary). To identify the catalogue of alternative splicing cis-acting regulatory elements in the different tetrapod species, I used a previously defined computational approach, a statistical analysis of exon/intron and splice-site composition that relies on a principle of compensation between splice-site strength and the presence of additional regulators. With an evolutionary comparative analysis of the exonic cis-acting regulators, I showed that these regulatory elements are generally shared among primates and more conserved than non-regulatory elements. In addition, I showed that the usage of these regulatory elements is also more conserved than expected by chance.
Together with the identification of species-specific cis-acting regulators, these results may help explain the rapid evolution of alternative splicing. I also developed a new approach, based on evolutionary sequence changes and corresponding alternative splicing changes, to identify potential splicing cis-acting regulators in primates. The identification of lineage-specific substitutions and corresponding lineage-specific alternative splicing changes allowed me to annotate the genomic sequences that might have played a role in the differences in alternative splicing patterns among primates. Finally, I showed that the identified splicing cis-acting regulator datasets are enriched in human disease-causing mutations, thus confirming their biological relevance.
Resumo:
Current standard treatments for metastatic colorectal cancer (CRC) are based on combination regimens with one of the two chemotherapeutic drugs, irinotecan or oxaliplatin. However, drug resistance frequently limits the clinical efficacy of these therapies. In order to gain new insights into mechanisms associated with chemoresistance, and departing from three distinct CRC cell models, we generated a panel of human colorectal cancer cell lines with acquired resistance to either oxaliplatin or irinotecan. We characterized the resistant cell line variants with regards to their drug resistance profile and transcriptome, and matched our results with datasets generated from relevant clinical material to derive putative resistance biomarkers. We found that the chemoresistant cell line variants had distinctive irinotecan- or oxaliplatin-specific resistance profiles, with non-reciprocal cross-resistance. Furthermore, we could identify several new, as well as some previously described, drug resistance-associated genes for each resistant cell line variant. Each chemoresistant cell line variant acquired a unique set of changes that may represent distinct functional subtypes of chemotherapy resistance. In addition, and given the potential implications for selection of subsequent treatment, we also performed an exploratory analysis, in relevant patient cohorts, of the predictive value of each of the specific genes identified in our cellular models.
Resumo:
SEPServer is a three-year collaborative project funded by the seventh framework programme (FP7-SPACE) of the European Union. The objective of the project is to provide the scientific community with access to state-of-the-art observations and analysis tools on solar energetic particle (SEP) events and related electromagnetic (EM) emissions. The project will eventually lead to a better understanding of the particle acceleration and transport processes at the Sun and in the inner heliosphere. These processes lead to SEP events that form one of the key elements of space weather. In this paper we present the first results from the systematic analysis work performed on the following datasets: SOHO/ERNE, SOHO/EPHIN, ACE/EPAM, Wind/WAVES and GOES X-rays. A catalogue of SEP events at 1 AU, with complete coverage over solar cycle 23, based on high-energy (~68 MeV) protons from SOHO/ERNE and electron recordings of the events by SOHO/EPHIN and ACE/EPAM, is presented. A total of 115 energetic particle events have been identified and analysed using velocity dispersion analysis (VDA) for protons and time-shifting analysis (TSA) for electrons and protons in order to infer the SEP release times at the Sun. EM observations during the times of the SEP event onsets have been gathered and compared to the particle release time estimates. Data from those events that occurred during European day-time, i.e., those that also have observations from ground-based observatories included in SEPServer, are listed and a preliminary analysis of their associations is presented. We find that VDA results for protons can be a useful tool for the analysis of proton release times, but if the derived proton path length falls outside the range 1 AU < s < 3 AU, the result of the analysis may be compromised, as indicated by the anti-correlation of the derived path length and the release time delay from the associated X-ray flare.
The average path length derived from VDA is about 1.9 times the nominal length of the spiral magnetic field line, implying that the path length of first-arriving MeV to deka-MeV protons is affected by interplanetary scattering. TSA of near-relativistic electrons yields release times that show significant scatter with respect to the EM emissions, but with a trend of increasing delay with increasing distance between the flare and the nominal footpoint of the Earth-connected field line.
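Velocity dispersion analysis rests on the linear relation t_onset(E) = t_release + s / v(E): fitting onset time against inverse speed yields the path length s as the slope and the solar release time as the intercept. A minimal numpy sketch on synthetic, noise-free proton onset times (the energies and the s = 1.9 AU used to generate them are illustrative, not SEPServer event data):

```python
import numpy as np

AU_OVER_C_MIN = 499.005 / 60.0   # light travel time for 1 AU, in minutes
M_P = 938.272                    # proton rest mass, MeV

def inv_beta(e_kin_mev):
    """Relativistic 1/beta for protons of given kinetic energy."""
    gamma = 1.0 + np.asarray(e_kin_mev, float) / M_P
    return 1.0 / np.sqrt(1.0 - 1.0 / gamma**2)

def vda_fit(energies_mev, onsets_min):
    """Fit onset time [min] vs travel time per AU; return
    (path length s [AU], release time [min])."""
    x = inv_beta(energies_mev) * AU_OVER_C_MIN   # minutes per AU travelled
    slope, intercept = np.polyfit(x, onsets_min, 1)
    return slope, intercept

# Synthetic event: s = 1.9 AU, release at t = 0 min
energies = np.array([10.0, 30.0, 68.0])
onsets = 1.9 * inv_beta(energies) * AU_OVER_C_MIN
s_fit, t_rel = vda_fit(energies, onsets)
```

With real data the fitted s absorbs interplanetary scattering, which is exactly why derived path lengths outside 1-3 AU flag a compromised release-time estimate in the analysis above.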
Resumo:
INTRODUCTION: Perfusion-CT (PCT) processing involves deconvolution, a mathematical operation that computes the perfusion parameters from the PCT time-density curves and an arterial curve. Delay-sensitive deconvolution does not correct for the arrival delay of contrast, whereas delay-insensitive deconvolution does. The goal of this study was to compare delay-sensitive and delay-insensitive deconvolution PCT in terms of delineation of the ischemic core and penumbra. METHODS: We retrospectively identified 100 patients with acute ischemic stroke who underwent admission PCT and CT angiography (CTA), a follow-up vascular study to determine recanalization status, and a follow-up noncontrast head CT (NCT) or MRI to calculate final infarct volume. PCT datasets were processed twice, once using delay-sensitive deconvolution and once using delay-insensitive deconvolution. Regions of interest (ROIs) were drawn, and cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) in these ROIs were recorded and compared. The volume and geographic distribution of ischemic core and penumbra obtained with both deconvolution methods were also recorded and compared. RESULTS: MTT and CBF values are affected by the deconvolution method used (p < 0.05), while CBV values remain unchanged. Optimal thresholds to delineate ischemic core and penumbra differ between delay-sensitive (145 % MTT, CBV 2 ml × 100 g(-1) × min(-1)) and delay-insensitive deconvolution (135 % MTT, CBV 2 ml × 100 g(-1) × min(-1)). When applying these method-specific thresholds, however, the predicted ischemic core (p = 0.366) and penumbra (p = 0.405) were similar with both methods. CONCLUSION: Both delay-sensitive and delay-insensitive deconvolution methods are appropriate for PCT processing in acute ischemic stroke patients. The predicted ischemic core and penumbra are similar with both methods when using different sets of thresholds, specific to each deconvolution method.
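The deconvolution step discussed above is commonly implemented with truncated singular value decomposition (sSVD), the classic delay-sensitive family; delay-insensitive variants replace the triangular convolution matrix with a zero-padded block-circulant one. A minimal noise-free sketch of the delay-sensitive version, with an idealized AIF chosen to keep the linear system well-conditioned (all curves and values are illustrative, not patient data):

```python
import numpy as np

def tsvd_deconvolve(aif, tissue, dt, rel_thresh=1e-6):
    """Recover the flow-scaled residue function k(t) = CBF * R(t)
    from tissue(t) = dt * (aif * k)(t) via truncated SVD.
    Delay-SENSITIVE: assumes no arrival delay between AIF and tissue."""
    n = len(aif)
    idx = np.subtract.outer(np.arange(n), np.arange(n))
    A = dt * np.where(idx >= 0, np.asarray(aif)[idx], 0.0)  # lower-triangular Toeplitz
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_thresh * s[0], 1.0 / s, 0.0)   # truncate small singular values
    return Vt.T @ (s_inv * (U.T @ tissue))

# Synthetic, noise-free example (arbitrary units)
dt, n = 1.0, 40
t = np.arange(n) * dt
aif = np.exp(-t / 3.0)              # idealized AIF peaking at t = 0
cbf, mtt = 0.6, 4.0
k_true = cbf * np.exp(-t / mtt)     # exponential residue function
tissue = dt * np.convolve(aif, k_true)[:n]
k = tsvd_deconvolve(aif, tissue, dt)
cbf_est = k.max()                   # CBF = peak of the residue function
```

A delay between contrast arrival in the artery and the tissue shifts the recovered peak and biases CBF/MTT in this formulation, which is the effect the delay-insensitive (block-circulant) variant is designed to remove.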
Resumo:
Electrical impedance tomography (EIT) is a non-invasive imaging technique that can measure cardiac-related intra-thoracic impedance changes. EIT-based cardiac output estimation relies on the assumption that the amplitude of the impedance change in the ventricular region is representative of stroke volume (SV). However, other factors such as heart motion can significantly affect this ventricular impedance change. In the present case study, a magnetic resonance imaging-based dynamic bio-impedance model fitting the morphology of a single male subject was built. Simulations were performed to evaluate the contribution of heart motion and its influence on EIT-based SV estimation. Myocardial deformation was found to be the main contributor to the ventricular impedance change (56%). However, motion-induced impedance changes showed a strong correlation (r = 0.978) with left ventricular volume. We explained this by the quasi-incompressibility of blood and myocardium. As a result, EIT achieved excellent accuracy in estimating a wide range of simulated SV values (error distribution of 0.57 ± 2.19 ml (1.02 ± 2.62%) and correlation of r = 0.996 after a two-point calibration was applied to convert impedance values to millilitres). As the model was based on a single subject, the strong correlation found between motion-induced changes and ventricular volume remains to be verified in larger datasets.
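The two-point calibration mentioned above maps the EIT impedance amplitude linearly onto stroke volume using two reference measurements. A minimal sketch of such a mapping (the reference pairs below are invented for illustration, not values from the study):

```python
def two_point_calibration(z1, sv1, z2, sv2):
    """Return a linear map impedance -> stroke volume (ml), fixed
    by two reference pairs (impedance amplitude, known SV)."""
    gain = (sv2 - sv1) / (z2 - z1)
    offset = sv1 - gain * z1
    return lambda z: gain * z + offset

# Hypothetical reference points: 10 a.u. <-> 50 ml, 20 a.u. <-> 100 ml
to_ml = two_point_calibration(10.0, 50.0, 20.0, 100.0)
sv = to_ml(14.0)   # interpolated stroke volume for a new beat
```

Because the calibration only fixes gain and offset, its validity hinges on the linearity of the impedance-volume relation, which is precisely what the strong correlation (r = 0.978) reported above supports.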