52 results for Algorithms, Properties, the KCube Graphs
at Université de Lausanne, Switzerland
Abstract:
In the first part of this research, three stages were defined for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) develop a standard methodology for analysing ink samples by high-performance thin-layer chromatography (HPTLC) in a reproducible way, even when ink samples are analysed at different times, in different locations and by different examiners; (b) compare ink samples automatically and objectively; and (c) define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied under various chemical and forensic conditions using standard performance tests commonly used in biometric studies. The results show that different algorithms are best suited to different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination, and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
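The specific comparison algorithms are not named in the abstract; as a minimal sketch of the kind of automatic, objective comparison and biometric-style evaluation it describes, one could score pairs of HPTLC intensity profiles with a correlation-based similarity and measure error rates at a decision threshold. The profile representation and the helper names below are illustrative assumptions, not the report's actual method.

    import numpy as np

    def similarity(profile_a, profile_b):
        # Pearson correlation between two HPTLC intensity profiles
        # (illustrative score; the report's actual algorithms are not specified here).
        a = (profile_a - profile_a.mean()) / profile_a.std()
        b = (profile_b - profile_b.mean()) / profile_b.std()
        return float(np.mean(a * b))

    def error_rates(genuine_scores, impostor_scores, threshold):
        # Biometric-style performance figures at a given decision threshold:
        # false non-match rate on same-ink pairs, false match rate on different-ink pairs.
        fnmr = np.mean(np.asarray(genuine_scores) < threshold)
        fmr = np.mean(np.asarray(impostor_scores) >= threshold)
        return fnmr, fmr

Sweeping the threshold over the observed genuine and impostor scores then yields the trade-off curves (e.g. DET or ROC) routinely reported in biometric performance studies.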
Abstract:
The detailed in-vivo characterization of subcortical brain structures is essential not only to understand the basic organizational principles of the healthy brain but also for the study of the involvement of the basal ganglia in brain disorders. The particular tissue properties of the basal ganglia, most importantly their high iron content, strongly affect the contrast of magnetic resonance imaging (MRI) images, hampering the accurate automated assessment of these regions. This technical challenge explains the substantial controversy in the literature about the magnitude, directionality and neurobiological interpretation of basal ganglia structural changes estimated from MRI and computational anatomy techniques. My scientific project addresses the pertinent need for accurate automated delineation of the basal ganglia using two complementary strategies: (i) empirical testing of the utility of novel imaging protocols to provide superior contrast in the basal ganglia and to quantify brain tissue properties; (ii) improvement of the algorithms for the reliable automated detection of the basal ganglia and thalamus. Previous research demonstrated that MRI protocols based on magnetization transfer (MT) saturation maps provide optimal grey-white matter contrast in subcortical structures compared with the widely used T1-weighted (T1w) images (Helms et al., 2009). Under the assumption of a direct impact of brain tissue properties on MR contrast, my first study addressed the question of the mechanisms underlying the regionally specific effects in the basal ganglia. I used established whole-brain voxel-based methods to test for grey matter volume differences between MT and T1w imaging protocols, with an emphasis on subcortical structures. I applied a regression model to explain the observed grey matter differences from the regionally specific impact of brain tissue properties on the MR contrast. The results of my first project prompted further methodological developments to create adequate priors for the basal ganglia and thalamus, allowing optimal automated delineation of these structures in a probabilistic tissue classification framework. I established a standardized workflow for manual labelling of the basal ganglia, thalamus and cerebellar dentate to create new tissue probability maps from quantitative MR maps featuring optimal grey-white matter contrast in subcortical areas. The validation step of the new tissue priors included a comparison of the classification performance with the existing probability maps. In my third project I continued investigating the factors impacting automated brain tissue classification that result in interpretational shortcomings when using T1w MRI data in the framework of computational anatomy. While the intensity in T1w images is predominantly
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive abilities instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimal mixture of short-scale and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
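The multi-scale formulation itself is not detailed in the abstract; a minimal sketch of the underlying idea, an SVR whose kernel mixes a short-scale and a large-scale Gaussian component, is given below. The length scales, the mixing weight w, and the use of scikit-learn are assumptions for illustration, not the paper's exact model.

    import numpy as np
    from sklearn.svm import SVR

    def rbf_gram(X, Y, length_scale):
        # Gaussian (RBF) Gram matrix between coordinate sets X and Y.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length_scale ** 2))

    def multi_scale_kernel(short_scale, large_scale, w):
        # Weighted sum of a short-scale and a large-scale kernel;
        # w controls the mixture of local anomalies and regional trend.
        def kernel(X, Y):
            return w * rbf_gram(X, Y, short_scale) + (1.0 - w) * rbf_gram(X, Y, large_scale)
        return kernel

    # Illustrative usage: X is an (n, 2) array of coordinates, y the measured activity.
    # model = SVR(kernel=multi_scale_kernel(short_scale=5.0, large_scale=50.0, w=0.3), C=10.0)
    # model.fit(X, y); z = model.predict(X_grid)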
Abstract:
The Constructive Thinking Inventory (CTI) measures cognitive coping strategies used in everyday problem solving. The main objective of this study was to assess the factorial structure, the internal consistency, the correspondence with the American normative values, and the discriminant validity of the French translation. A community sample of 777 students aged 12 to 26 years, recruited from schools, colleges and universities, answered the 108-item self-report CTI questionnaire during a class period. A sample of 60 male adolescent offenders aged 13 to 18 years, recruited from two institutions for juvenile offenders, answered the CTI during an individual interview. Results show that the French translation of the CTI follows a factorial structure identical to that of Epstein's American version in both adolescents and young adults, and that its internal consistency is satisfactory. Differences in Constructive Thinking profiles according to gender and age, and between the Swiss and American samples, are discussed. Juvenile offenders differed from community youths on most of the scales, indicating good discriminant validity of the CTI. In conclusion, the French translation of the CTI appears to preserve the original version's psychometric properties. The present study provides normative values from a community sample of Swiss adolescents and young adults.
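The abstract reports satisfactory internal consistency without specifying the statistic; assuming the conventional Cronbach's alpha computed per scale, a minimal sketch of the calculation is:

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, n_items) matrix of scored questionnaire answers for one scale.
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_var / total_var)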
Abstract:
We propose a compressive sensing algorithm that exploits geometric properties of images to recover high-quality images from few measurements. The image reconstruction is done by iterating the following two steps: 1) estimation of the normal vectors of the image level curves, and 2) reconstruction of an image fitting the normal vectors, the compressed sensing measurements, and the sparsity constraint. The proposed technique extends naturally to nonlocal operators and graphs, exploiting the repetitive nature of textured images to recover fine-detail structures. In both cases, the problem is reduced to a series of convex minimization problems that can be efficiently solved with a combination of variable splitting and augmented Lagrangian methods, leading to fast and easy-to-code algorithms. Extensive experiments show a clear improvement over related state-of-the-art algorithms in the quality of the reconstructed images and in the robustness of the proposed method to noise, different kinds of images, and reduced measurements.
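A simplified sketch of the two-step iteration is given below, under stated assumptions: a dense sensing matrix A, plain gradient descent in place of the variable-splitting/augmented-Lagrangian inner solver, and no nonlocal/graph extension.

    import numpy as np

    def grad(u):
        # Forward-difference image gradient.
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        return gx, gy

    def div(px, py):
        # Backward-difference divergence (approximate adjoint of grad).
        dx = np.diff(px, axis=1, prepend=px[:, :1])
        dy = np.diff(py, axis=0, prepend=py[:1, :])
        return dx + dy

    def reconstruct(A, b, shape, n_outer=10, n_inner=100, lam=0.1, tau=1e-3):
        # Alternate the two steps described in the abstract:
        #  1) estimate unit normals of the level curves of the current image estimate;
        #  2) fit an image to the measurements b = A @ u.ravel() while keeping its
        #     gradient aligned with those normals (TV-like term |grad u| - n . grad u).
        u = (A.T @ b).reshape(shape)
        for _ in range(n_outer):
            gx, gy = grad(u)
            mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
            nx, ny = gx / mag, gy / mag                      # step 1: normal-vector estimate
            for _ in range(n_inner):                         # step 2: simple gradient descent
                gx, gy = grad(u)
                mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
                g = (A.T @ (A @ u.ravel() - b)).reshape(shape)
                g -= lam * div(gx / mag - nx, gy / mag - ny)
                u -= tau * g
        return u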
Abstract:
We have used massively parallel signature sequencing (MPSS) to sample the transcriptomes of 32 normal human tissues to an unprecedented depth, thus documenting the patterns of expression of almost 20,000 genes with high sensitivity and specificity. The data confirm the widely held belief that differences in gene expression between cell and tissue types are largely determined by transcripts derived from a limited number of tissue-specific genes, rather than by combinations of more promiscuously expressed genes. Expression of a little more than half of all known human genes seems to account for both the common requirements and the specific functions of the tissues sampled. A classification of tissues based on patterns of gene expression largely reproduces classifications based on anatomical and biochemical properties. The unbiased sampling of the human transcriptome achieved by MPSS supports the idea that most human genes have been mapped, if not functionally characterized. This data set should prove useful for the identification of tissue-specific genes, for the study of global changes induced by pathological conditions, and for the definition of a minimal set of genes necessary for basic cell maintenance. The data are available on the Web at http://mpss.licr.org and http://sgb.lynxgen.com.
Abstract:
Summary: Lipophilicity plays an important role in the determination and the comprehension of the pharmacokinetic behaviour of drugs. It is usually expressed by the partition coefficient (log P) in the n-octanol/water system. The use of an additional solvent system (1,2-dichloroethane/water) is necessary to obtain complementary information, as log Poct values alone are not sufficient to explain all biological properties. The aim of this thesis is to develop tools for predicting the lipophilicity of new drugs and for analysing the information yielded by those log P values. Part I presents the development of theoretical models used to predict lipophilicity. Chapter 2 shows the necessity of extending the existing solvatochromic analyses in order to predict correctly the lipophilicity of new and complex neutral compounds. In Chapter 3, solvatochromic analyses are used to develop a model for the prediction of the lipophilicity of ions; the resulting global model allows the lipophilicity of neutral, anionic and cationic solutes to be estimated. Part II presents the detailed study of two physicochemical filters. Chapter 4 shows that the Discovery RP Amide C16 stationary phase allows the lipophilicity of the neutral form of basic and acidic solutes to be estimated, except for lipophilic acidic solutes, which present additional interactions with this particular stationary phase. In Chapter 5, four different IAM stationary phases are investigated. For neutral solutes, linear data are obtained whatever the IAM column used. For ionized solutes, retention is due to a balance of electrostatic and hydrophobic interactions, so no discrimination is observed between different series of solutes bearing the same charge from one column to another. Part III presents two examples illustrating the information obtained through structure-property relationships (SPR). Comparing graphically the lipophilicity values obtained in two different solvent systems reveals the presence of intramolecular effects such as internal H-bonds (Chapter 6). SPR is also used to study the partitioning of ionizable groups encountered in medicinal chemistry (Chapter 7).
Lay summary (translated from the French "Résumé large public"): To exert its therapeutic effect, a drug must reach its site of action in sufficient quantity. The effective amount of drug reaching the site of action depends on the interactions between the drug and numerous constituents of the organism, such as metabolic enzymes or biological membranes. The passage of the drug through these membranes, called permeation, is an important parameter to optimize in order to develop more potent drugs. Lipophilicity plays a key role in understanding the passive permeation of drugs. It is generally expressed by the partition coefficient (log P) in the (immiscible) n-octanol/water solvent system. Log Poct values alone have proved insufficient to explain permeation through all the different biological membranes of the human body; the use of an additional solvent system (1,2-dichloroethane/water) provides the complementary information essential for a good understanding of the permeation process. A large number of experimental and theoretical tools are available for studying lipophilicity. This thesis focuses mainly on the development or improvement of some of these tools so that they can be applied to a wider range of compounds. Briefly, two of these tools are: (1) factorizing lipophilicity as a function of structural properties of the compounds (such as volume), which makes it possible to develop theoretical models for predicting the lipophilicity of new compounds or drug candidates; this approach is applied to the analysis of the lipophilicity of neutral as well as charged compounds; (2) reversed-phase high-pressure liquid chromatography (RP-HPLC), a method commonly used for the experimental determination of log Poct values.
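The solvatochromic models of Part I are multilinear relationships between log P and structural descriptors; a minimal sketch of fitting and applying such a model (the descriptor set and the use of ordinary least squares are illustrative assumptions, not the thesis's exact equations) could be:

    import numpy as np

    def fit_logp_model(descriptors, logp):
        # descriptors: (n_compounds, n_descriptors) matrix, e.g. molar volume,
        # dipolarity/polarizability, H-bond acidity and basicity (LFER-style terms).
        # Returns the intercept and coefficients of the multilinear model
        #   log P = c0 + c1*V + c2*pi + c3*alpha + c4*beta + ...
        X = np.column_stack([np.ones(len(logp)), descriptors])
        coef, *_ = np.linalg.lstsq(X, logp, rcond=None)
        return coef

    def predict_logp(coef, descriptors):
        # Apply the fitted model to new compounds.
        X = np.column_stack([np.ones(len(descriptors)), descriptors])
        return X @ coef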
Abstract:
The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; and (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model of the network controlling the CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented in the software GINsim, which enables the definition, analysis, and simulation of logical regulatory graphs.
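As a minimal illustration of the asynchronous state transition graphs and attractors discussed here, the sketch below builds the asynchronous dynamics of a toy three-component Boolean network and extracts the terminal strongly connected components. The network rules are invented for illustration and are unrelated to the CD4+ T cell model, and GINsim itself implements far more (priority classes, model reduction, graph compression).

    import itertools
    import networkx as nx

    # Toy 3-component Boolean network: each rule gives the target value of one component.
    rules = {
        0: lambda s: int(not s[2]),        # x0 <- NOT x2
        1: lambda s: s[0],                 # x1 <- x0
        2: lambda s: int(s[0] and s[1]),   # x2 <- x0 AND x1
    }

    def asynchronous_stg(rules, n):
        # Asynchronous state transition graph: from each state, update one
        # component at a time (one edge per component whose target differs).
        g = nx.DiGraph()
        for state in itertools.product((0, 1), repeat=n):
            g.add_node(state)
            for i, rule in rules.items():
                target = rule(state)
                if target != state[i]:
                    succ = state[:i] + (target,) + state[i + 1:]
                    g.add_edge(state, succ)
        return g

    g = asynchronous_stg(rules, 3)
    # Attractors = terminal strongly connected components (no edge leaving the SCC).
    cond = nx.condensation(g)
    attractors = [cond.nodes[n]["members"] for n in cond.nodes if cond.out_degree(n) == 0]
    print(attractors)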
Abstract:
The epithelial amiloride-sensitive sodium channel (ENaC) controls transepithelial Na+ movement in Na(+)-transporting epithelia and is associated with Liddle syndrome, an autosomal dominant form of salt-sensitive hypertension. Detailed analysis of ENaC channel properties and of the functional consequences of mutations causing Liddle syndrome has so far been limited by the lack of a method allowing specific and quantitative detection of cell-surface-expressed ENaC. We have developed a quantitative assay based on the binding of a 125I-labeled M2 anti-FLAG monoclonal antibody (M2Ab*) directed against a FLAG reporter epitope introduced in the extracellular loop of each of the alpha, beta, and gamma ENaC subunits. Insertion of the FLAG epitope into ENaC sequences did not change its functional and pharmacological properties. The binding specificity and affinity (Kd = 3 nM) allowed us to correlate, in individual Xenopus oocytes, the macroscopic amiloride-sensitive sodium current (INa) with the number of ENaC wild-type and mutant subunits expressed at the cell surface. These experiments demonstrate that: (i) only heteromultimeric channels made of alpha, beta, and gamma ENaC subunits are maximally and efficiently expressed at the cell surface; (ii) the overall ENaC open probability is one order of magnitude lower than previously observed in single-channel recordings; (iii) the mutation causing Liddle syndrome (beta R564stop) enhances channel activity by two mechanisms, i.e., by increasing ENaC cell-surface expression and by changing channel open probability. This quantitative approach provides new insights into the molecular mechanisms underlying one form of salt-sensitive hypertension.
Abstract:
Previous studies reported on the association of left ventricular mass index (LVMI) with urinary sodium or with circulating or urinary aldosterone. We investigated the independent associations of LVMI with the urinary excretion of both sodium and aldosterone. We randomly recruited 317 untreated subjects from a white population (45.1% women; mean age 48.2 years). Measurements included echocardiographic left ventricular (LV) properties, the 24-hour urinary excretion of sodium and aldosterone, plasma renin activity (PRA), and proximal (RNa(prox)) and distal (RNa(dist)) renal sodium reabsorption, assessed from the endogenous lithium clearance. In multivariable-adjusted models, we expressed changes in LVMI per 1-SD increase in the explanatory variables, while accounting for sex, age, systolic blood pressure, and the waist-to-hip ratio. LVMI increased independently with the urinary excretion of both sodium (+2.48 g/m(2); P=0.005) and aldosterone (+2.63 g/m(2); P=0.004). Higher sodium excretion was associated with increased mean wall thickness (MWT: +0.126 mm, P=0.054), but with no change in LV end-diastolic diameter (LVID: +0.12 mm, P=0.64). In contrast, higher aldosterone excretion was associated with higher LVID (+0.54 mm; P=0.017), but with no change in MWT (+0.070 mm; P=0.28). Higher RNa(dist) was associated with lower relative wall thickness (-0.81x10(-2), P=0.017), because of opposite trends in LVID (+0.33 mm; P=0.13) and MWT (-0.130 mm; P=0.040). LVMI was not associated with PRA or RNa(prox). In conclusion, LVMI increased independently with both urinary sodium and aldosterone excretion. Increased MWT explained the association of LVMI with urinary sodium, and increased LVID the association of LVMI with urinary aldosterone.
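As a minimal sketch of the regression strategy summarized here (changes in LVMI per 1-SD increase in an explanatory variable, adjusted for sex, age, systolic blood pressure and waist-to-hip ratio), with illustrative variable names and a plain least-squares fit standing in for the study's statistical software:

    import numpy as np

    def lvmi_per_sd(lvmi, exposure, covariates):
        # Regress LVMI on a z-scored exposure (e.g. 24-h urinary sodium excretion)
        # plus adjustment covariates; the exposure coefficient is then the change
        # in LVMI per 1-SD increase in the exposure.
        z = (exposure - exposure.mean()) / exposure.std(ddof=1)
        X = np.column_stack([np.ones(len(lvmi)), z, covariates])
        coef, *_ = np.linalg.lstsq(X, lvmi, rcond=None)
        return coef[1]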
Abstract:
BACKGROUND/OBJECTIVES: (1) To cross-validate tetrapolar (4-BIA) and octopolar (8-BIA) bioelectrical impedance analysis vs dual-energy X-ray absorptiometry (DXA) for the assessment of total and appendicular body composition and (2) to evaluate the accuracy of external 4-BIA algorithms for the prediction of total body composition, in a representative sample of Swiss children. SUBJECTS/METHODS: A representative sample of 333 Swiss children aged 6-13 years from the Kinder-Sportstudie (KISS) (ISRCTN15360785). Whole-body fat-free mass (FFM) and appendicular lean tissue mass were measured with DXA. Body resistance (R) was measured at 50 kHz with 4-BIA and segmental body resistance at 5, 50, 250 and 500 kHz with 8-BIA. The resistance index (RI) was calculated as height²/R. Selection of predictors (gender, age, weight, RI4 and RI8) for the BIA algorithms was performed using bootstrapped stepwise linear regression on 1000 samples. We calculated 95% confidence intervals (CI) of the regression coefficients and measures of model fit using bootstrap analysis. Limits of agreement were used as measures of interchangeability of BIA with DXA. RESULTS: 8-BIA was more accurate than 4-BIA for the assessment of FFM (root mean square error (RMSE) = 0.90 kg (95% CI 0.82-0.98) vs 1.12 kg (1.01-1.24); limits of agreement 1.80 to -1.80 kg vs 2.24 to -2.24 kg). 8-BIA also gave accurate estimates of appendicular body composition, with RMSE ≤ 0.10 kg for the arms and ≤ 0.24 kg for the legs. All external 4-BIA algorithms performed poorly, with substantial negative proportional bias (r ≥ 0.48, P<0.001). CONCLUSIONS: In a representative sample of young Swiss children, (1) 8-BIA was superior to 4-BIA for the prediction of FFM, (2) external 4-BIA algorithms gave biased predictions of FFM, and (3) 8-BIA was an accurate predictor of segmental body composition.
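A minimal sketch of the prediction step described here, with the resistance index RI = height²/R entering a linear model for DXA-measured FFM and bootstrap resampling providing confidence intervals for the coefficients; the array layout is an assumption, and the study's stepwise predictor selection is omitted for brevity:

    import numpy as np

    def resistance_index(height_cm, resistance_ohm):
        # RI = height^2 / R, the usual bioimpedance predictor.
        return height_cm ** 2 / resistance_ohm

    def bootstrap_ffm_model(X, ffm, n_boot=1000, seed=0):
        # X: predictor matrix (e.g. gender, age, weight, RI); ffm: DXA fat-free mass.
        # Returns the bootstrap distribution of the regression coefficients;
        # percentile intervals over the rows give 95% CIs per coefficient.
        rng = np.random.default_rng(seed)
        n = len(ffm)
        Xi = np.column_stack([np.ones(n), X])
        coefs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            c, *_ = np.linalg.lstsq(Xi[idx], ffm[idx], rcond=None)
            coefs.append(c)
        return np.array(coefs)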
Abstract:
Catalase is an important virulence factor for survival in macrophages and other phagocytic cells. In the Chlamydiaceae, no catalase had been described so far. With the sequencing and annotation of the full genomes of Chlamydia-related bacteria, the presence of different catalase-encoding genes has been documented. However, their distribution in the Chlamydiales order and the functionality of these catalases remain unknown. The phylogeny of chlamydial catalases was inferred using MrBayes, maximum likelihood, and maximum parsimony algorithms, allowing the description of three clade 3 and two clade 2 catalases. Only monofunctional catalases were found (no catalase-peroxidase or Mn-catalase). All presented a conserved catalytic domain and tertiary structure. The enzymatic activity of cloned chlamydial catalases was assessed by measuring hydrogen peroxide degradation. The catalases are enzymatically active with different efficiencies. The catalase of Parachlamydia acanthamoebae is the least efficient of all (its catalytic activity was 2 logs lower than that of Pseudomonas aeruginosa). Based on the phylogenetic analysis, we hypothesize that an ancestral clade 2 catalase probably was present in the common ancestor of all current Chlamydiales but was retained only in Criblamydia sequanensis and Neochlamydia hartmannellae. The clade 3 catalases, present in Estrella lausannensis and Parachlamydia acanthamoebae, probably were acquired by lateral gene transfer from Rhizobiales, whereas for Waddlia chondrophila they likely originated from Legionellales or Actinomycetales. The acquisition of catalases on several occasions in the Chlamydiales suggests the importance of this enzyme for the bacteria in their host environment.
Abstract:
The 2008 Data Fusion Contest organized by the IEEE Geoscience and Remote Sensing Data Fusion Technical Committee deals with the classification of high-resolution hyperspectral data from an urban area. Unlike in previous editions of the contest, the goal was not only to identify the best algorithm but also to provide a collaborative effort: the decision fusion of the best individual algorithms aimed at further improving the classification performance, and the best algorithms were ranked according to their relative contribution to the decision fusion. This paper presents the five awarded algorithms and the conclusions of the contest, stressing the importance of decision fusion, dimension reduction, and supervised classification methods such as neural networks and support vector machines.
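The abstract stresses decision fusion of individually trained classifiers; a minimal sketch of one simple fusion rule, per-pixel majority voting over their label maps, is shown below. Weighted or accuracy-ranked variants are straightforward extensions, and the awarded fusion scheme itself is not specified in the abstract.

    import numpy as np

    def majority_vote(label_maps):
        # label_maps: list of (n_pixels,) integer class-label arrays, one per classifier.
        stacked = np.stack(label_maps, axis=0)
        n_classes = stacked.max() + 1
        # Count votes per class for every pixel, then pick the most voted class.
        counts = np.apply_along_axis(np.bincount, 0, stacked, minlength=n_classes)
        return counts.argmax(axis=0)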
Abstract:
The 2009-2010 Data Fusion Contest organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society focused on the detection of flooded areas using multi-temporal and multi-modal images. Both high-spatial-resolution optical and synthetic aperture radar data were provided. The goal was not only to identify the best algorithms (in terms of accuracy), but also to investigate the further improvement derived from decision fusion. This paper presents the four awarded algorithms and the conclusions of the contest, investigating both supervised and unsupervised methods and the use of multi-modal data for flood detection. Interestingly, a simple unsupervised change detection method provided accuracy similar to that of the supervised approaches, and a digital elevation model-based predictive method yielded a comparable projected change detection map without using post-event data.
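A minimal sketch of the kind of simple unsupervised change detection highlighted by the contest: differencing co-registered pre- and post-event images and thresholding the difference automatically. Otsu's threshold is an illustrative choice here, not necessarily the awarded method.

    import numpy as np
    from skimage.filters import threshold_otsu

    def change_map(pre, post):
        # pre, post: co-registered single-band images of the same scene.
        diff = np.abs(post.astype(float) - pre.astype(float))
        t = threshold_otsu(diff)          # automatic, training-free threshold
        return diff > t                   # True where a change (e.g. flooding) is flagged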