66 results for intersection computation
at Université de Lausanne, Switzerland
Abstract:
Path integration is known to provide information for keeping track of spatial location. Surprisingly, few investigations have addressed sex differences in the computation of traveling distance. This work aimed to analyze the reproduction of both passive and active linear displacements in women and men. To this end, blindfolded subjects were displaced in a wheelchair, then on foot, three times in each condition over a fixed distance. Reproductions of passive and active traveling distance, distance estimations, and pointing responses towards the starting point were analyzed. In the passive condition, women's error was larger than men's: whereas women generally underestimated the traveling distance, men overestimated it. In the active condition, no sex differences were observed. When blindfolded subjects had to estimate the traveling distance, the female error was larger than the male one; but when subjects were asked to indicate the visual cue corresponding to the traveling distance, the male error was larger than the female one. Finally, pointing to the starting point (0°) after a whole-body rotation showed a larger deviation from 0° in men than in women. These results suggest that the sex of the subject influences the brain's computation of path integration information.
Abstract:
We have devised a program that allows computation of the power of the F-test, and hence determination of appropriate sample and subsample sizes, in the context of the one-way hierarchical analysis of variance with fixed effects. The power at a fixed alternative is an increasing function of the sample size and of the subsample size. The program makes it easy to obtain the power of the F-test for a range of values of sample and subsample sizes, and therefore the appropriate sizes based on a desired power. The program can be used for the 'ordinary' case of the one-way analysis of variance, as well as for hierarchical analysis of variance with two stages of sampling. Examples are given of the practical use of the program.
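For illustration, a minimal sketch (not the authors' program) of this power computation for the ordinary one-way fixed-effects ANOVA, using the noncentral F distribution; the group effects, variance and sample sizes below are invented:

```python
import numpy as np
from scipy import stats

def anova_power(group_means, n_per_group, sigma2, alpha=0.05):
    """Power of the one-way fixed-effects ANOVA F-test at a fixed alternative."""
    k = len(group_means)
    grand = np.mean(group_means)
    # Noncentrality: n * sum of squared group effects / error variance.
    lam = n_per_group * np.sum((np.asarray(group_means) - grand) ** 2) / sigma2
    df1, df2 = k - 1, k * (n_per_group - 1)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)   # central-F critical value
    return stats.ncf.sf(f_crit, df1, df2, lam)  # P(reject) under the alternative

# Scan group sizes until a desired power (say 0.8) is reached.
for n in (5, 10, 15, 20, 25):
    print(n, round(float(anova_power([0.0, 0.5, 1.0], n, sigma2=1.0)), 3))
```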
Abstract:
Extensive gene flow between wheat (Triticum sp.) and several wild relatives of the genus Aegilops has recently been detected despite notoriously high levels of selfing in these species. Here, we assess and model the spread of wheat alleles into natural populations of the barbed goatgrass (Aegilops triuncialis), a wild wheat relative prevailing in the Mediterranean flora. Our sampling, based on an extensive survey of 31 Ae. triuncialis populations collected over a 60 km × 20 km area in southern Spain (Grazalema Mountain chain, Andalusia), totalling 458 specimens, is completed with 33 wheat cultivars representative of the European domesticated pool. All specimens were genotyped with amplified fragment length polymorphism (AFLP) markers with the aim of estimating wheat admixture levels in Ae. triuncialis populations. This survey first confirmed extensive hybridization and backcrossing of wheat into the wild species. We then used explicit modelling of populations and approximate Bayesian computation to estimate the selfing rate of Ae. triuncialis along with the magnitude, the tempo and the geographical distance over which wheat alleles introgress into Ae. triuncialis populations. These simulations confirmed that extensive introgression of wheat alleles (2.7 × 10⁻⁴ wheat immigrants for each Ae. triuncialis resident, at each generation) into Ae. triuncialis occurs despite a high selfing rate (Fis ≈ 1 and selfing rate = 97%). These results are discussed in the light of the risks associated with the release of genetically modified wheat cultivars in Mediterranean agrosystems.
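For illustration, a minimal rejection-ABC sketch of this kind of inference; the "simulator" is a deliberately crude stand-in based on the equilibrium relation Fis = s / (2 − s) under partial selfing, not the authors' spatially explicit population model, and the observed value is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fis(selfing_rate):
    """Crude stand-in simulator: equilibrium Fis = s / (2 - s) plus noise."""
    expected = selfing_rate / (2.0 - selfing_rate)
    return expected + rng.normal(0.0, 0.05)

observed_fis = 0.95                 # hypothetical observed summary statistic
n_draws, tolerance = 100_000, 0.02

prior = rng.uniform(0.0, 1.0, n_draws)          # flat prior on the selfing rate
sims = np.array([simulate_fis(s) for s in prior])
accepted = prior[np.abs(sims - observed_fis) < tolerance]

print(f"posterior mean selfing rate ≈ {accepted.mean():.3f} "
      f"({accepted.size} of {n_draws} draws accepted)")
```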
Abstract:
Drug metabolism can produce metabolites with physicochemical and pharmacological properties that differ substantially from those of the parent drug, and consequently has important implications for both drug safety and efficacy. To reduce the risk of costly clinical-stage attrition due to the metabolic characteristics of drug candidates, there is a need for efficient and reliable ways to predict drug metabolism in vitro, in silico and in vivo. In this Perspective, we provide an overview of the state of the art of experimental and computational approaches for investigating drug metabolism. We highlight the scope and limitations of these methods, and indicate strategies to harvest the synergies that result from combining measurement and prediction of drug metabolism.
Abstract:
The perceived low levels of genetic diversity, poor interspecific competitive and defensive abilities, and loss of dispersal capacities of insular lineages have driven the view that oceanic islands are evolutionary dead ends. Focusing on the Atlantic bryophyte flora distributed across the archipelagos of the Azores, Madeira, the Canary Islands, Western Europe, and northwestern Africa, we used an integrative approach combining species distribution modeling and population genetic analyses based on approximate Bayesian computation to determine whether this view applies to organisms with inherently high dispersal capacities. Genetic diversity was found to be higher in island than in continental populations, contributing to mounting evidence that, contrary to theoretical expectations, island populations are not necessarily genetically depauperate. Patterns of genetic variation among island and continental populations consistently fitted those simulated under a scenario of de novo foundation of continental populations from insular ancestors better than those expected if islands represented a sink or a refugium of continental biodiversity. We suggest that the northeastern Atlantic archipelagos have played a key role as a stepping stone for transoceanic migrants. Our results challenge the traditional notion that oceanic islands are the end of the colonization road and illustrate the significant role of oceanic islands as reservoirs of novel biodiversity for the assembly of continental floras.
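For illustration, a toy sketch of ABC model choice between two colonization scenarios, in the spirit of the scenario comparison above; the simulators and the observed island/continent diversity ratio are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)

def sim_island_source():      # islands seed the continent: islands more diverse
    return rng.normal(1.3, 0.2)

def sim_continent_source():   # continent seeds the islands: islands less diverse
    return rng.normal(0.8, 0.2)

observed_ratio = 1.25         # hypothetical island/continent diversity ratio
n_draws, tol = 50_000, 0.05

models = rng.integers(0, 2, n_draws)    # equal prior weight on both scenarios
sims = np.empty(n_draws)
for i, m in enumerate(models):
    sims[i] = sim_island_source() if m == 0 else sim_continent_source()

# Posterior model probabilities are approximated by the share of accepted draws.
accepted = models[np.abs(sims - observed_ratio) < tol]
print(f"posterior P(island-source scenario) ≈ {(accepted == 0).mean():.2f}")
```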
Abstract:
Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the sum and the maximum of chi-squared statistics, which measure the average and the strongest association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which offers not only a significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary-membership-based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
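For illustration, a simplified sketch of the two gene statistics, assuming independent SNPs (Pascal's analytic solutions additionally account for linkage disequilibrium between SNPs, which this toy version ignores):

```python
import numpy as np
from scipy import stats

def gene_scores(snp_pvalues):
    """Gene-level p-values from per-SNP GWAS p-values (independence assumed)."""
    chi2 = stats.chi2.isf(np.asarray(snp_pvalues), df=1)  # p -> chi^2(1) stat
    k = chi2.size
    # Sum statistic: the sum of k independent chi^2(1) variables is chi^2(k).
    p_sum = stats.chi2.sf(chi2.sum(), df=k)
    # Max statistic: P(max > m) = 1 - (1 - p_min)^k under independence.
    p_max = 1.0 - (1.0 - stats.chi2.sf(chi2.max(), df=1)) ** k
    return p_sum, p_max

print(gene_scores([0.001, 0.2, 0.6, 0.04]))   # (sum-based, max-based)
```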
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold-standard TDM approach but requires computational assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each program. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the software varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentrations (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user-friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including by non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information-system interfacing, user-friendliness, data storage capability and report generation.
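For illustration, a minimal sketch of the Bayesian (maximum a posteriori) step underlying such tools, assuming a one-compartment IV-bolus model with a log-normal population prior on clearance and a single measured concentration; all pharmacokinetic values below are invented and drug-independent:

```python
import numpy as np
from scipy.optimize import minimize_scalar

dose, volume = 500.0, 50.0          # mg, L (hypothetical IV bolus)
t_obs, c_obs = 12.0, 4.2            # h, measured concentration (mg/L)
cl_pop, omega = 5.0, 0.3            # population clearance (L/h), log-SD
sigma = 0.5                         # residual assay error (mg/L)

def neg_log_posterior(log_cl):
    cl = np.exp(log_cl)
    c_pred = (dose / volume) * np.exp(-(cl / volume) * t_obs)
    prior = ((log_cl - np.log(cl_pop)) / omega) ** 2    # log-normal prior
    lik = ((c_obs - c_pred) / sigma) ** 2               # Gaussian residual
    return 0.5 * (prior + lik)

map_cl = np.exp(minimize_scalar(neg_log_posterior).x)
print(f"MAP clearance ≈ {map_cl:.2f} L/h")
# A dose is then chosen so that concentrations predicted with this
# individual clearance stay within the target range.
```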
Abstract:
Sound localization relies on the analysis of interaural time and intensity differences, as well as attenuation patterns by the outer ear. We investigated the relative contributions of interaural time and intensity difference cues to sound localization by testing 60 healthy subjects: 25 with focal left and 25 with focal right hemispheric brain damage. Group and single-case behavioural analyses, as well as anatomo-clinical correlations, confirmed that deficits were more frequent and much more severe after right than left hemispheric lesions and for the processing of interaural time than intensity difference cues. For spatial processing based on interaural time difference cues, different error types were evident in the individual data. Deficits in discriminating between neighbouring positions occurred in both hemispaces after focal right hemispheric brain damage, but were restricted to the contralesional hemispace after focal left hemispheric brain damage. Alloacusis (perceptual shifts across the midline) occurred only after focal right hemispheric brain damage and was associated with minor or severe deficits in position discrimination. During spatial processing based on interaural intensity cues, deficits were less severe in the right hemispheric brain damage than left hemispheric brain damage group and no alloacusis occurred. These results, matched to anatomical data, suggest the existence of a binaural sound localization system predominantly based on interaural time difference cues and primarily supported by the right hemisphere. More generally, our data suggest that two distinct mechanisms contribute to: (i) the precise computation of spatial coordinates allowing spatial comparison within the contralateral hemispace for the left hemisphere and the whole space for the right hemisphere; and (ii) the building up of global auditory spatial representations in right temporo-parietal cortices.
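For illustration, a back-of-envelope rendering of the interaural time difference cue discussed above, using the textbook spherical-head (Woodworth) approximation; the head radius and the mapping are idealizations, not this study's stimuli:

```python
import numpy as np

r, c = 0.0875, 343.0                  # head radius (m), speed of sound (m/s)

def itd_seconds(azimuth_deg):
    """Woodworth spherical-head ITD for a distant source."""
    th = np.radians(azimuth_deg)
    return (r / c) * (th + np.sin(th))

for az in (0, 15, 45, 90):
    print(f"{az:>3} deg  ITD ≈ {itd_seconds(az) * 1e6:6.1f} µs")
```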
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies minimizing the l2 and l1TV norms of the prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS), and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically changed in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r²MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of extremely fast computation time. The l-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
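For illustration, a compact sketch of a closed-form single-orientation dipole inversion (thresholded k-space division), the kind of direct solution that closed-form QSM methods build on; it is not the MCF method itself, and the input volume is a toy array:

```python
import numpy as np

def tkd_qsm(field_map, voxel_size=(1.0, 1.0, 1.0), threshold=0.2):
    """Susceptibility map from a (unitless) field perturbation volume."""
    nx, ny, nz = field_map.shape
    kx = np.fft.fftfreq(nx, voxel_size[0])
    ky = np.fft.fftfreq(ny, voxel_size[1])
    kz = np.fft.fftfreq(nz, voxel_size[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = np.inf                    # avoid 0/0 at the DC component
    D = 1.0 / 3.0 - KZ**2 / k2              # unit dipole kernel, B0 along z
    with np.errstate(divide="ignore"):
        D_inv = np.where(np.abs(D) > threshold, 1.0 / D, 0.0)  # truncation
    return np.real(np.fft.ifftn(np.fft.fftn(field_map) * D_inv))

chi = tkd_qsm(np.random.rand(32, 32, 32))   # toy volume, shape check only
```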
Abstract:
This study describes the validation of a new wearable system for the assessment of 3D spatial parameters of gait. The new method is based on the detection of temporal parameters, coupled to optimized fusion and de-drifted integration of inertial signals. Composed of two wireless inertial modules attached to the feet, the system provides stride length, stride velocity, foot clearance, and turning angle parameters at each gait cycle, based on the computation of 3D foot kinematics. The accuracy and precision of the proposed system were compared to an optical motion capture system as reference. Its repeatability across measurements (test-retest reliability) was also evaluated. Measurements were performed in 10 young (mean age 26.1±2.8 years) and 10 elderly volunteers (mean age 71.6±4.6 years) who were asked to perform U-shaped and 8-shaped walking trials, and then a 6-min walking test (6MWT). A total of 974 gait cycles were used to compare gait parameters with the reference system. Mean accuracy±precision was 1.5±6.8 cm for stride length, 1.4±5.6 cm/s for stride velocity, 1.9±2.0 cm for foot clearance, and 1.6±6.1° for turning angle. A difference in gait performance was observed between young and elderly volunteers during the 6MWT, particularly in foot clearance. The proposed method makes it possible to analyze various aspects of gait, including turns, gait initiation and termination, and inter-cycle variability. The system is lightweight, easy to wear and use, and suitable for clinical applications requiring objective evaluation of gait outside of the lab environment.
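For illustration, a one-dimensional sketch of the de-drifted integration idea: foot acceleration is integrated to velocity, the velocity is forced back to zero at each foot-flat instant (when the foot is known to be still), and the corrected velocity is integrated to stride length; the signal below is synthetic, whereas the actual system fuses full 3D inertial data:

```python
import numpy as np

fs = 200.0                                  # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)           # one synthetic gait cycle (s)
accel = 8.0 * np.sin(2 * np.pi * t) + 0.3   # forward accel (m/s^2) + bias

vel = np.cumsum(accel) / fs                 # raw integration drifts with bias
# Zero-velocity update: the foot is still at both foot-flat instants, so any
# residual end velocity is drift; remove it linearly over the cycle.
drift = np.linspace(0.0, vel[-1], len(vel))
vel_corrected = vel - drift
stride_length = np.sum(vel_corrected) / fs  # integrate velocity to distance

print(f"stride length ≈ {stride_length:.2f} m")
```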
Abstract:
Computer simulations provide a practical way to address scientific questions that would otherwise be intractable. In evolutionary biology, and in population genetics in particular, the investigation of evolutionary processes frequently involves the implementation of complex models, making simulations a particularly valuable tool in the area. In this thesis work, I explored three questions involving the geographical range expansion of populations, taking advantage of spatially explicit simulations coupled with approximate Bayesian computation. First, the neutral evolutionary history of the human spread around the world was investigated, leading to a surprisingly simple model: a straightforward diffusion process of migrations from East Africa across a world map with homogeneous landmasses replicated to a very large extent the complex patterns observed in real human populations, suggesting a more continuous (as opposed to structured) view of the distribution of modern human genetic diversity, which may serve better as a base model for further studies. Second, the postglacial evolution of the European barn owl (Tyto alba), with the formation of a remarkable coat-color cline, was inspected with two rounds of simulations: (i) to determine the demographic background history and (ii) to test the probability that a phenotypic cline like the one observed in the natural populations could appear without natural selection. We verified that the modern barn owl population originated from a single Iberian refugium and that its color cline was formed not through neutral evolution but with the necessary participation of selection. The third and last part of this thesis is a simulation-only study inspired by the barn owl case above. In this chapter, we showed that selection is indeed effective during range expansions and that it leaves a distinctive signature, which can then be used to detect and measure natural selection in range-expanding populations.
Abstract:
In the field of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms allow, inter alia, a fingermark to be compared against a fingerprint database and therefore a link to be established between the mark and a known source. With the growth of the capacities of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases increases. The current challenge for the field of fingerprint identification lies in the growth of these databases, which makes it possible to find impressions that are very similar but come from distinct fingers. At the same time, however, these data and systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be utilized in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger and the between-variability of the mark with respect to a database, can then be based on representative data. Thus, these data allow an evaluation which may be more detailed than that obtained by the application of rules established long before the advent of these large databases, or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system (AFIS) when the source of the tested and compared marks is known. These ratios must support the hypothesis that is known to be true. Moreover, they should support this hypothesis more and more strongly with the addition of information in the form of additional minutiae. For the modeling of within- and between-variability, the necessary data were defined, and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors which influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence has been shown is the orientation of the minutiae. The results show that the likelihood ratios resulting from the use of the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that two impressions were left by the same finger, when they in fact came from different fingers, was 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2%, then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained are on average of the order of 100, 1000, and 10,000 for 6, 7 and 8 minutiae when the two impressions come from the same finger.
These likelihood ratios can therefore be an important aid for decision making. Both positive effects of adding minutiae (a drop in the rate of likelihood ratios that could lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study. Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found, and showed satisfactory results.
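For illustration, a schematic of such a score-based likelihood ratio: the AFIS comparison score of the case is evaluated under a within-finger and a between-finger score distribution, both modelled here with kernel density estimates on invented scores:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

within_scores = rng.normal(400, 40, 300)     # same-finger comparison scores
between_scores = rng.normal(150, 50, 5000)   # different-finger scores

kde_within = gaussian_kde(within_scores)     # numerator density
kde_between = gaussian_kde(between_scores)   # denominator density

def likelihood_ratio(score):
    return kde_within(score)[0] / kde_between(score)[0]

# LR > 1 supports the same-source hypothesis; LR < 1 the different-source one.
print(f"LR at a case score of 380 ≈ {likelihood_ratio(380.0):.3g}")
```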