127 results for statistical lip modelling
Abstract:
The resistance of mosquitoes to chemical insecticides is threatening vector control programmes worldwide. Cytochrome P450 monooxygenases (CYPs) are known to play a major role in insecticide resistance, allowing resistant insects to metabolize insecticides at a higher rate. Among them, members of the mosquito CYP6Z subfamily, such as Aedes aegypti CYP6Z8 and its Anopheles gambiae orthologue CYP6Z2, have frequently been associated with pyrethroid resistance. However, their role in the pyrethroid degradation pathway remains unclear. In the present study, we created a genetically modified yeast strain overexpressing Ae. aegypti cytochrome P450 reductase and CYP6Z8, thereby producing the first mosquito P450-CPR (NADPH-cytochrome P450 reductase) complex in a recombinant yeast system. The results of the present study show that: (i) CYP6Z8 metabolizes PBAlc (3-phenoxybenzoic alcohol) and PBAld (3-phenoxybenzaldehyde), common pyrethroid metabolites produced by carboxylesterases, producing PBA (3-phenoxybenzoic acid); (ii) CYP6Z8 transcription is induced by PBAlc, PBAld and PBA; (iii) An. gambiae CYP6Z2 metabolizes PBAlc and PBAld in the same way; (iv) PBA is the major metabolite produced in vivo and is excreted without further modification; and (v) in silico modelling of substrate-enzyme interactions supports a similar role for other mosquito CYP6Zs in pyrethroid degradation. By playing a pivotal role in the degradation of pyrethroid insecticides, mosquito CYP6Zs thus represent good targets for mosquito resistance-management strategies.
Abstract:
Aim: This study compares the direct macroecological approach (MEM) for modelling species richness (SR) with the more recent approach of stacking predictions from individual species distribution models (S-SDM). We implemented both approaches on the same dataset and discuss their respective theoretical assumptions, strengths and drawbacks. We also tested how well both approaches reproduce observed patterns of SR along an elevational gradient. Location: Two study areas in the Alps of Switzerland. Methods: We implemented MEM by relating species counts to environmental predictors with statistical models, assuming a Poisson distribution. S-SDM was implemented by modelling each species' distribution individually and then stacking the obtained prediction maps in three different ways - summing binary predictions, summing random draws of binomial trials, and summing predicted probabilities - to obtain a final species count. Results: The direct MEM approach yields nearly unbiased predictions centred around the observed mean values, but with a lower correlation between predictions and observations than that achieved by the S-SDM approaches. It also cannot provide any information on species identity and, thus, community composition; it does, however, accurately reproduce the hump-shaped pattern of SR observed along the elevational gradient. The S-SDM approach summing binary maps can predict individual species and thus communities, but tends to overpredict SR. The two other S-SDM approaches - summing binomial trials based on predicted probabilities, and summing predicted probabilities - do not overpredict richness, but they respectively predict many competing end points of assembly or lose the individual species predictions. Furthermore, all S-SDM approaches fail to reproduce appropriately the observed hump-shaped pattern of SR along the elevational gradient. Main conclusions: The macroecological approach and S-SDM have complementary strengths. We suggest that both could be used in combination to obtain better SR predictions, for example by constraining S-SDM by MEM predictions.
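A minimal sketch of the three stacking rules described above, assuming hypothetical per-species occurrence probabilities from the individual SDMs (the arrays and the 0.5 threshold are placeholders, not the study's data):

    # Three ways to stack per-species SDM predictions into species richness.
    import numpy as np

    rng = np.random.default_rng(0)
    probs = rng.uniform(0, 1, size=(5, 10))  # (n_sites, n_species) SDM outputs

    # 1) Sum of binary presence/absence maps (tends to overpredict richness).
    richness_binary = (probs >= 0.5).sum(axis=1)

    # 2) Sum of random Bernoulli draws: one of many competing end points of assembly.
    richness_draws = rng.binomial(1, probs).sum(axis=1)

    # 3) Sum of predicted probabilities: expected richness, species identity lost.
    richness_expected = probs.sum(axis=1)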
Abstract:
Familial searching consists of searching for a full profile left at a crime scene in a National DNA Database (NDNAD). In this paper we are interested in the circumstance where no full match is returned, but a partial match is found between a database member's profile and the crime stain. Because close relatives share more of their DNA than unrelated persons, this partial match may indicate that the crime stain was left by a close relative of the person with whom the partial match was found. This approach has successfully solved important crimes in the UK and the USA. In a previous paper, a model that takes into account substructure and siblings was used to simulate a NDNAD. In this paper, we have used this model to test the usefulness of familial searching and to offer guidelines for pre-assessment of cases based on the likelihood ratio. Siblings of "persons" present in the simulated Swiss NDNAD were created. These profiles (N = 10,000) were used as traces and compared to the whole database (N = 100,000). The statistical results obtained show that the technique has great potential, confirming the findings of previous studies. However, effectiveness of the technique is only one part of the story. Familial searching has juridical and ethical aspects that should not be ignored. In Switzerland, for example, there are no specific guidelines on the legality or otherwise of familial searching. This article therefore presents statistical results and also addresses the criminological and civil-liberties aspects, weighing the risks and benefits of familial searching.
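As a hedged illustration of the likelihood-ratio reasoning behind familial searching (not the paper's simulation model), the standard single-locus full-sibling likelihood ratio follows from the IBD coefficients k0 = 1/4, k1 = 1/2, k2 = 1/4; the allele frequencies below are hypothetical:

    # LR = P(matching genotypes | full siblings) / P(matching genotypes | unrelated).
    def sibling_lr_heterozygote(pa, pb):
        """Trace and candidate share heterozygous genotype ab."""
        return 0.25 + (pa + pb) / (8 * pa * pb) + 1 / (8 * pa * pb)

    def sibling_lr_homozygote(pa):
        """Trace and candidate share homozygous genotype aa."""
        return 0.25 + 1 / (2 * pa) + 1 / (4 * pa * pa)

    print(sibling_lr_heterozygote(0.1, 0.2))  # rarer alleles give larger LRs
    print(sibling_lr_homozygote(0.05))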
Abstract:
In this article we introduce JULIDE, a software toolkit developed to perform the 3D reconstruction, intensity normalization, volume standardization by 3D image registration and voxel-wise statistical analysis of autoradiographs of mouse brain sections. This software tool has been developed in the open-source ITK software framework and is freely available under a GPL license. The article presents the complete image processing chain from raw data acquisition to 3D statistical group analysis. Results of the group comparison in the context of a study on spatial learning are shown as an illustration of the data that can be obtained with this tool.
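JULIDE itself is built on C++/ITK, but the final voxel-wise group-analysis step can be sketched with a hedged NumPy/SciPy analogue (array shapes and the threshold below are hypothetical):

    # Voxel-wise two-sample comparison of registered, normalized volumes.
    import numpy as np
    from scipy import stats

    group_a = np.random.rand(8, 64, 64, 32)  # (n_subjects, x, y, z)
    group_b = np.random.rand(8, 64, 64, 32)

    # t-test at every voxel; axis=0 runs over subjects.
    t_map, p_map = stats.ttest_ind(group_a, group_b, axis=0)

    # Naive significance mask; a real analysis corrects for multiple testing.
    mask = p_map < 0.001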
Abstract:
Due to their performance-enhancing properties, the use of anabolic steroids (e.g. testosterone, nandrolone, etc.) is banned in elite sports. Therefore, doping control laboratories accredited by the World Anti-Doping Agency (WADA) screen urine for these prohibited substances, among others. It is particularly challenging to detect misuse of naturally occurring anabolic steroids such as testosterone (T), which is a popular ergogenic agent in sports and society. To screen for misuse of these compounds, drug testing laboratories monitor the urinary concentrations of endogenous steroid metabolites and their ratios, which together constitute the steroid profile, and compare them with reference ranges to detect unnaturally high values. However, the interpretation of the steroid profile is difficult because of large inter-individual variances, various confounding factors, and the range of marketed endogenous steroids that influence the profile in different ways. A support vector machine (SVM) algorithm was developed to statistically evaluate urinary steroid profiles composed of an extended range of steroid profile metabolites. This model makes the interpretation of the analytical data in the search for deviating steroid profiles feasible and shows its versatility towards different kinds of misused endogenous steroids. The SVM model outperforms the current biomarkers with respect to detection sensitivity and accuracy, particularly when it is coupled to individual data as stored in the Athlete Biological Passport.
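A minimal sketch of such an SVM over steroid-profile features, using scikit-learn; the feature matrix, labels and kernel choice are assumptions, not the authors' published model:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical training data: rows = urine samples, columns = steroid
    # metabolite concentrations/ratios; labels: 0 = normal, 1 = suspicious.
    X = np.random.rand(200, 6)
    y = np.random.randint(0, 2, 200)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(X, y)

    # Score a new profile; coupling with longitudinal Athlete Biological
    # Passport data would refine this population-based decision.
    suspicion = model.predict_proba(np.random.rand(1, 6))[0, 1]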
Abstract:
Over the past two decades, CT arthrography has benefited from numerous technological advances and now represents an excellent alternative to magnetic resonance imaging (MRI) and/or MR arthrography for assessing hip pathologies. It remains limited, however, by its substantial exposure to ionizing radiation. Iterative reconstruction (IR) techniques have recently been implemented successfully in CT imaging; the literature shows that their use reduces dose by roughly 40 to 55% in spine CT compared with current protocols using filtered back projection (FBP). To our knowledge, the use of IR techniques in hip CT arthrography had not been evaluated so far. The aim of our study was to assess the impact of the ASIR technique (GE Healthcare) on objective and subjective image quality in hip CT arthrography, and to evaluate its dose-reduction potential. Thirty-seven patients undergoing hip CT arthrography were randomized into three groups: standard dose (CTDIvol = 38.4 mGy) and two reduced-dose groups (CTDIvol = 24.6 or 15.4 mGy). Images were reconstructed with filtered back projection (FBP) and with increasing percentages of ASIR (30, 50, 70 and 90%). Noise and the contrast-to-noise ratio (CNR) were measured. Two musculoskeletal radiologists independently scored image quality for several anatomical structures on a four-grade scale, and assessed labral and articular cartilage lesions. The results show that noise increased (p = 0.0009) and CNR decreased (p = 0.001) significantly as dose decreased. Conversely, as the ASIR percentage increased, noise decreased (p = 0.0001) and CNR increased (p < 0.003) significantly; image-quality scores also increased significantly for the labrum, cartilage, subchondral bone, overall image quality (above 50% ASIR) and noise (p < 0.04), while scores for trabecular bone and muscle decreased significantly (p < 0.03). Regardless of the dose level, there was no significant difference in the detection and characterization of labral lesions (n = 24, p = 1) or cartilage lesions (n = 40, p > 0.89) as a function of ASIR percentage. Our work showed that using 50% ASIR or more significantly reduces the radiation dose delivered to the patient during hip CT arthrography while maintaining diagnostic image quality comparable to a standard-dose protocol using filtered back projection.
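The objective metrics used in the study can be sketched as follows (a hedged illustration; ROI sizes and HU values are hypothetical):

    # Noise = SD in a homogeneous ROI; CNR between two tissue ROIs.
    import numpy as np

    def noise(roi):
        return float(np.std(roi))

    def cnr(roi_a, roi_b):
        return abs(roi_a.mean() - roi_b.mean()) / noise(roi_b)

    rng = np.random.default_rng(1)
    muscle = rng.normal(60, 12, size=(20, 20))     # ~60 HU, SD ~12
    contrast = rng.normal(300, 12, size=(20, 20))  # intra-articular contrast
    print(noise(muscle), cnr(contrast, muscle))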
Abstract:
One of the world's largest wollastonite deposits was formed at the contact of the northern Hunter Mountain Batholith (California, USA) in Paleozoic sediments. Wollastonite occurs as zones of variable thickness surrounding layers or nodules of quartzite in limestones. A minimum formation temperature of 650 degrees C is estimated from isolated periclase-bearing lenses in the area. Contact metamorphism of siliceous carbonates produced mineral assemblages that are consistent with heterogeneous and partly limited infiltration of water-rich fluids, compatible with the O-18/O-16 and C-13/C-12 isotopic patterns recorded in the carbonates. Oxygen isotope compositions of wollastonites in the study area likewise do not require infiltration of large quantities of externally derived fluids out of equilibrium with the rocks. Delta O-18 values of wollastonite are high (14.8 to 25.0 parts per thousand; median: 19.7 parts per thousand) and close to those of the host limestone (19.7 to 28.0 parts per thousand; median: 24.9 parts per thousand) and quartz (18.0 to 29.1 parts per thousand; median: 22.6 parts per thousand). Isotopic disequilibrium exists at quartz/wollastonite and wollastonite/calcite boundaries. Classical batch/Rayleigh fractionation models based on reactant-product equilibrium are therefore not applicable to the wollastonite rims. As an alternative, an approach that relies on local instantaneous mass balance for the reactants, based on the wollastonite-forming reaction, is suggested for modelling the wollastonite reaction rims. This model reproduces many of the measured delta O-18 values of the wollastonite reaction rims of the current study to within +/- 1 parts per thousand, even though the wollastonite compositions vary by almost 10 parts per thousand.
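A minimal sketch of the local instantaneous mass-balance idea, assuming the simple wollastonite-forming reaction and oxygen conservation between instantaneous reactants and products (an illustration, not the paper's full model; the CO2 term would be fixed by the CO2-wollastonite fractionation at the reaction temperature):

    \mathrm{CaCO_3} + \mathrm{SiO_2} \rightarrow \mathrm{CaSiO_3} + \mathrm{CO_2}

    3\,\delta^{18}\mathrm{O}_{cc} + 2\,\delta^{18}\mathrm{O}_{qz}
      \approx 3\,\delta^{18}\mathrm{O}_{wo} + 2\,\delta^{18}\mathrm{O}_{CO_2}
    \;\Longrightarrow\;
    \delta^{18}\mathrm{O}_{wo} \approx
      \tfrac{1}{3}\bigl(3\,\delta^{18}\mathrm{O}_{cc} + 2\,\delta^{18}\mathrm{O}_{qz} - 2\,\delta^{18}\mathrm{O}_{CO_2}\bigr)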
Abstract:
Computed tomography (CT) is an imaging technique in which interest has kept growing since its introduction in the early 1970s. In the clinical environment, this imaging system has emerged as a gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure a benefit-risk balance that works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over the patient's lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer life expectancy than adults. For this population, the risk of developing a cancer whose latency period exceeds 20 years is significantly higher than for adults. Assuming that each examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. CT technology has been advancing at a rapid pace, and since 2009 new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce as much as possible the dose delivered in CT examinations of children and young adults while maintaining diagnostic image quality, in order to propose optimized protocols.

The optimization step requires evaluating both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first, the "physical approach", computes physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-defined conditions. Although this approach has limitations, because it does not take the radiologist's perception into account, it enables the physical characterization of image properties in a simple and timely way. The second, the "clinical approach", is based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step score the image quality of these structures for diagnostic purposes using a simple rating scale. This approach is relatively complicated to implement and time-consuming; nevertheless, it is very close to the practice of radiologists and can be considered a reference method.

Primarily, this work revealed that the statistical iterative reconstruction algorithms studied in the clinic (ASIR and VEO) have a strong potential to reduce CT dose (by up to 90%). However, by their mechanisms they modify the appearance of the image, producing a change in texture that may affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the noise frequency spectrum, whose analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This project also demonstrated that integrating these new statistical iterative reconstruction techniques into clinical practice is not straightforward and cannot be done simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work and the image-quality tools developed will also be able to guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
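A minimal sketch of the noise power spectrum (NPS) estimate underlying the "physical" approach above, assuming the usual dx*dy/(Nx*Ny) normalization; the ROI stack and pixel size are hypothetical:

    # 2D NPS from mean-subtracted, noise-only ROIs of a uniform phantom.
    import numpy as np

    def nps_2d(noise_rois, pixel_mm=0.5):
        n, Nx, Ny = noise_rois.shape
        rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)
        spectra = np.abs(np.fft.fft2(rois)) ** 2  # fft2 acts on the last two axes
        return (pixel_mm * pixel_mm / (Nx * Ny)) * spectra.mean(axis=0)

    rois = np.random.default_rng(2).normal(0, 10, size=(32, 64, 64))
    nps = nps_2d(rois)  # iterative reconstructions typically reshape this spectrum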
Abstract:
Background: Excessive exposure to solar ultraviolet (UV) light is the main cause of most skin cancers in humans. Factors such as the increase in solar irradiation at ground level (anthropic pollution), the rise in standard of living (vacations in sunny areas) and, mostly, the development of outdoor activities have contributed to increased exposure. Thus, unsurprisingly, the incidence of skin cancers has increased over the last decades more than that of any other cancer. Melanoma is the most lethal cutaneous cancer, while cutaneous carcinomas are the most common cancer type worldwide. UV exposure depends on environmental as well as individual factors related to activity. The influence of individual factors on exposure among building workers was investigated in a previous study. Posture and orientation were found to account for at least 38% of the total variance of relative individual exposure. A high variance of short-term exposure was observed between different body locations, indicating the occurrence of intense, subacute exposures. It was also found that effective short-term exposure ranged between 0 and 200% of ambient irradiation, suggesting that ambient irradiation is a poor predictor of effective exposure. Various dosimetric techniques make it possible to assess individual effective exposure, but dosimetric measurements remain tedious and tend to be situation-specific. In fact, individual factors (exposure time, body posture and orientation in the sun) often limit the extrapolation of exposure results to similar activities conducted in other conditions. Objective: The research presented in this paper aims at developing and validating a predictive tool for effective individual exposure to solar UV. Methods: Existing computer graphics techniques (3D rendering) were adapted to reflect solar exposure conditions and calculate short-term anatomical doses. A numerical model, represented as a 3D triangular mesh, is used to represent the exposed body. The amount of solar energy received by each triangle is calculated, taking into account irradiation intensity, incidence angle and possible shadowing from other body parts. The model takes into account the three components of solar irradiation (direct, diffuse and albedo) as well as the orientation and posture of the body. Field measurements were carried out using a forensic mannequin at the Payerne MeteoSwiss station. Short-term dosimetric measurements were performed at 7 anatomical locations for 5 body postures. Field results were compared to the predictions obtained from the numerical model. Results: The best match between prediction and measurements was obtained for upper body parts such as the shoulders (modelled/measured ratio: mean = 1.21, SD = 0.34) and neck (mean = 0.81, SD = 0.32). Small curved body parts such as the forehead (mean = 6.48, SD = 9.61) exhibited a poorer match. The prediction is also less accurate for complex postures such as kneeling (mean = 4.13, SD = 8.38) compared to standing (mean = 0.85, SD = 0.48). Overall, the values obtained from the dosimeters and those computed from the model are globally consistent. Conclusion: Although further development and validation are required, these results suggest that effective exposure could be predicted for a given activity (work or leisure) under various ambient irradiation conditions. Using a generic modelling approach is of high interest in terms of implementation costs as well as predictive and retrospective capabilities.
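A minimal sketch of the per-triangle dose computation described above, keeping only the direct-beam term (the diffuse and albedo components and shadow tests are omitted, and all numeric values below are assumptions):

    # Direct-beam energy (J) received by one triangle of the body mesh.
    import numpy as np

    def direct_dose(vertices, sun_dir, irradiance_w_m2, seconds):
        # vertices: (3, 3) triangle corners in metres; sun_dir: unit vector to the sun.
        normal = np.cross(vertices[1] - vertices[0], vertices[2] - vertices[0])
        area = 0.5 * np.linalg.norm(normal)
        cos_i = max(0.0, float(np.dot(normal / np.linalg.norm(normal), sun_dir)))
        return irradiance_w_m2 * area * cos_i * seconds

    tri = np.array([[0.0, 0.0, 1.5], [0.1, 0.0, 1.5], [0.0, 0.1, 1.6]])
    print(direct_dose(tri, np.array([0.0, 0.5, 0.866]), 500.0, 600))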
Abstract:
This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion relates to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part of probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
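As a hedged sketch of what 'learning' a probability from data means in this setting, consider the standard Beta-binomial update often used for the tables of a Bayesian network (the prior and counts below are hypothetical, not the paper's toner data):

    # Prior Beta(alpha, beta) on the proportion of toners with a characteristic;
    # data: k of n examined toner samples show it.
    alpha, beta = 1.0, 1.0  # uniform prior
    k, n = 37, 120          # hypothetical survey counts

    # Posterior is Beta(alpha + k, beta + n - k); its mean is the probability
    # one would plug into the corresponding network node.
    posterior_mean = (alpha + k) / (alpha + beta + n)
    print(posterior_mean)   # about 0.31 with these counts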
Abstract:
Aim: We investigated the late Quaternary history of two closely related and partly sympatric species of Primula from the south-western European Alps, P. latifolia Lapeyr. and P. marginata Curtis, by combining phylogeographical and palaeodistribution modelling approaches. In particular, we were interested in whether the two approaches were congruent and identified the same glacial refugia. Location: South-western European Alps. Methods: For the phylogeographical analysis we included 353 individuals from 28 populations of P. marginata and 172 individuals from 15 populations of P. latifolia, and used amplified fragment length polymorphisms (AFLPs). For palaeodistribution modelling, species distribution models (SDMs) were based on extant species occurrences and then projected onto climate models (CCSM, MIROC) of the Last Glacial Maximum (LGM), approximately 21 ka. Results: The locations of the modelled LGM refugia were supported by various indices of genetic variation. The refugia of the two species were largely geographically isolated, overlapping over only 6% to 11% of the species' total LGM distributions. This overlap decreased further when the position of the glacial ice sheet and the differing elevational and edaphic distributions of the two species were considered. Main conclusions: The combination of phylogeography and palaeodistribution modelling proved useful for locating putative glacial refugia of two alpine species of Primula. The phylogeographical data allowed us to identify those parts of the modelled LGM refugial area that were likely source areas for recolonization. The SDMs predicted LGM refugial areas substantially larger and geographically more divergent than could have been predicted from the phylogeographical data alone.
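A minimal sketch of the range-overlap statistic quoted above, computed from two binary LGM suitability grids (the grids and threshold are random placeholders, and the choice of denominator is an assumption):

    import numpy as np

    rng = np.random.default_rng(3)
    lgm_latifolia = rng.random((100, 100)) > 0.8  # binary SDM projections
    lgm_marginata = rng.random((100, 100)) > 0.8

    shared = np.logical_and(lgm_latifolia, lgm_marginata).sum()
    total = np.logical_or(lgm_latifolia, lgm_marginata).sum()
    print(shared / total)  # overlap as a share of the combined LGM distribution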