962 results for Fast view-matching algorithm
Abstract:
The prediction of binding modes (BMs) occurring between a small molecule and a target protein of biological interest has become of great importance for drug development. The overwhelming diversity of needs leaves room for docking approaches addressing specific problems. Nowadays, the universe of docking software ranges from fast and user-friendly programs to algorithmically flexible and accurate approaches. EADock2 is an example of the latter. Its multiobjective scoring function was designed around the CHARMM22 force field and the FACTS solvation model. However, the major drawback of such a software design lies in its computational cost. EADock dihedral space sampling (DSS) is built on the most efficient features of EADock2, namely its hybrid sampling engine and multiobjective scoring function. Its performance is equivalent to that of EADock2 for drug-like ligands, while the CPU time required has been reduced by several orders of magnitude. This huge improvement was achieved through a combination of several innovative features including an automatic bias of the sampling toward putative binding sites, and a very efficient tree-based DSS algorithm. When the top-scoring prediction is considered, 57% of BMs of a test set of 251 complexes were reproduced within 2 Å RMSD to the crystal structure. Up to 70% were reproduced when considering the five top scoring predictions. The success rate is lower in cross-docking assays but remains comparable with that of the latest version of AutoDock that accounts for the protein flexibility. © 2011 Wiley Periodicals, Inc. J Comput Chem, 2011.
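The 2 Å success criterion used above is easy to state in code. Below is a minimal sketch (not EADock's implementation) of the RMSD test between a predicted and a crystallographic binding mode, assuming atoms are matched one-to-one and both poses sit in the same receptor frame, as in redocking:

```python
import numpy as np

def pose_rmsd(pred, ref):
    """RMSD between two matched coordinate sets of shape (N, 3).

    No superposition is applied: in redocking, predicted and crystal
    poses already share the receptor's frame of reference."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean(np.sum((pred - ref) ** 2, axis=1))))

def success_rate(predictions, references, cutoff=2.0):
    """Fraction of binding-mode predictions within `cutoff` angstroms RMSD."""
    hits = [pose_rmsd(p, r) <= cutoff for p, r in zip(predictions, references)]
    return sum(hits) / len(hits)
```

Applied to the top-scoring pose of each complex in a benchmark set, `success_rate` yields the kind of percentage figures quoted in the abstract.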
Abstract:
INTRODUCTION. Patient-ventilator asynchrony is a frequent issue in non-invasive mechanical ventilation (NIV), and leaks at the patient-mask interface play a major role in its pathogenesis. NIV algorithms alleviate the deleterious impact of leaks and improve patient-ventilator interaction. Neurally adjusted ventilatory assist (NAVA), a neurally triggered mode that avoids interference between leaks and the usual pneumatic trigger, could further improve patient-ventilator interaction in NIV patients. OBJECTIVES. To evaluate the feasibility of NAVA in patients receiving prophylactic post-extubation NIV and to compare the respective impact of PSV and NAVA, with and without the NIV algorithm, on patient-ventilator interaction. METHODS. Prospective study conducted in a 16-bed adult intensive care unit (ICU) in a tertiary university hospital. Over a 2-month period, 17 adult medical ICU patients were included who had been extubated for less than 2 h and in whom prophylactic post-extubation NIV was indicated. Patients were randomly mechanically ventilated for 10 min with: PSV without NIV algorithm (PSV-NIV-), PSV with NIV algorithm (PSV-NIV+), NAVA without NIV algorithm (NAVA-NIV-) and NAVA with NIV algorithm (NAVA-NIV+). Breathing pattern descriptors, diaphragm electrical activity, leak volume, inspiratory trigger delay (Tdinsp), inspiratory time in excess (Tiexcess) and the five main asynchronies were quantified. The asynchrony index (AI) and the asynchrony index influenced by leaks (AIleaks) were computed. RESULTS. Peak inspiratory pressure and diaphragm electrical activity were similar in the four conditions. With both PSV and NAVA, the NIV algorithm significantly reduced the level of leak (p < 0.01). Tdinsp was not affected by the NIV algorithm but was shorter in NAVA than in PSV (p < 0.01). Tiexcess was shorter in NAVA and PSV-NIV+ than in PSV-NIV- (p < 0.05). The prevalence of double triggering was significantly lower in PSV-NIV+ than in NAVA-NIV+. Compared with PSV, NAVA significantly reduced the prevalence of premature cycling and late cycling, while the NIV algorithm did not influence premature cycling. AI was not affected by the NIV algorithm but was significantly lower in NAVA than in PSV (p < 0.05). AIleaks was virtually null with NAVA and significantly lower than in PSV (p < 0.05). CONCLUSIONS. NAVA is feasible in patients receiving prophylactic post-extubation NIV. NAVA and the NIV algorithm improve patient-ventilator synchrony in different ways. NAVA-NIV+ offers the best patient-ventilator interaction. Clinical studies are required to assess the potential clinical benefit of NAVA in patients receiving NIV.
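The asynchrony index quantified above is conventionally computed as the number of asynchrony events divided by the total of ventilator cycles plus ineffective efforts; a minimal sketch of that convention (the study's exact operational definition may differ):

```python
def asynchrony_index(n_asynchrony_events, n_ventilator_cycles, n_ineffective_efforts):
    """Asynchrony index (%) as commonly defined in the literature:
    events / (ventilator cycles + ineffective efforts) * 100.
    An AI above 10% is conventionally taken to indicate severe asynchrony."""
    total_breaths = n_ventilator_cycles + n_ineffective_efforts
    return 100.0 * n_asynchrony_events / total_breaths
```

For example, 15 asynchrony events over 135 ventilator cycles and 15 ineffective efforts gives an AI of exactly 10%.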
Abstract:
Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) which summarize the most relevant information. The method was specifically developed for drug discovery, is fast, and does not require human supervision, being suitable for its application on very large series of compounds. The quality of the results has been tested by running the method on the ligand structure of a large number of ligand-receptor complexes and then comparing the position of the selected hot spots with actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to obtain GRIND-like molecular descriptors which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
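As an illustration of the idea of condensing a field into a few informative points, here is a simplified greedy sketch, not the authors' algorithm: repeatedly take the most favorable (lowest-energy) grid point and suppress its neighborhood. The grid layout, spacing and distance threshold are assumptions for the example:

```python
import numpy as np

def extract_hot_spots(energy_grid, spacing=1.0, n_spots=5, min_dist=2.0):
    """Greedy selection of low-energy grid points, a simplified stand-in
    for MIF hot-spot extraction: take the most favorable remaining point,
    discard candidates closer than `min_dist` angstroms, repeat."""
    mask = np.isfinite(energy_grid)
    coords = np.argwhere(mask).astype(float) * spacing  # grid indices -> angstroms
    energies = energy_grid[mask]                        # same C order as argwhere
    order = np.argsort(energies)                        # most negative first
    chosen = []
    for idx in order:
        p = coords[idx]
        if all(np.linalg.norm(p - q) >= min_dist for q in chosen):
            chosen.append(p)
        if len(chosen) == n_spots:
            break
    return np.array(chosen)
```

The minimum-distance rule is what keeps the selected points spread over distinct interaction regions rather than clustered in one minimum.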
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and, thus, allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. Also, the experiments show that, when using parallel computing, it is possible to improve the top ILS-based metaheuristic by just incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
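The ingredients described above (a biased-randomized starting solution built on a greedy priority rule, a simple perturbation, an accept-if-better criterion) can be sketched for a toy flowshop. This is a minimal illustration under stated assumptions, not the ILS-ESP code: the total-processing-time priority rule and the `beta` bias parameter are choices made for the example.

```python
import random

def makespan(seq, proc):
    """Flowshop makespan; proc[job][machine] holds processing times."""
    m = len(proc[0])
    comp = [0.0] * m                       # completion time on each machine
    for j in seq:
        comp[0] += proc[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + proc[j][k]
    return comp[-1]

def biased_start(proc, beta=0.3, rng=random):
    """Biased randomization of a greedy order: jobs sorted by total
    processing time, then drawn with a geometric bias so high-priority
    jobs tend to be picked early while alternative starts stay possible."""
    pending = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    seq = []
    while pending:
        i = 0
        while i < len(pending) - 1 and rng.random() > beta:
            i += 1
        seq.append(pending.pop(i))
    return seq

def ils(proc, iters=200, rng=random):
    """Minimal ILS loop: random 2-swap moves with an accept-if-better rule."""
    best = biased_start(proc, rng=rng)
    best_val = makespan(best, proc)
    cur, cur_val = best[:], best_val
    n = len(cur)
    for _ in range(iters):
        a, b = rng.randrange(n), rng.randrange(n)
        cand = cur[:]
        cand[a], cand[b] = cand[b], cand[a]
        v = makespan(cand, proc)
        if v < cur_val:
            cur, cur_val = cand, v
            if v < best_val:
                best, best_val = cand[:], v
    return best, best_val
```

Running `ils` from several biased starts in parallel is the kind of diversification the abstract credits for its gains in distributed settings.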
Abstract:
The in situ hybridization Allen Mouse Brain Atlas was mined for proteases expressed in the somatosensory cerebral cortex. Among the 480 genes coding for protease/peptidases, only four were found enriched in cortical interneurons: Reln coding for reelin; Adamts8 and Adamts15 belonging to the class of metzincin proteases involved in reshaping the perineuronal net (PNN) and Mme encoding for Neprilysin, the enzyme degrading amyloid β-peptides. The pattern of expression of metalloproteases (MPs) was analyzed by single-cell reverse transcriptase multiplex PCR after patch clamp and was compared with the expression of 10 canonical interneurons markers and 12 additional genes from the Allen Atlas. Clustering of these genes by K-means algorithm displays five distinct clusters. Among these five clusters, two fast-spiking interneuron clusters expressing the calcium-binding protein Pvalb were identified, one co-expressing Pvalb with Sst (PV-Sst) and another co-expressing Pvalb with three metallopeptidases Adamts8, Adamts15 and Mme (PV-MP). By using Wisteria floribunda agglutinin, a specific marker for PNN, PV-MP interneurons were found surrounded by PNN, whereas the ones expressing Sst, PV-Sst, were not.
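The clustering step described above can be illustrated with a plain Lloyd's-algorithm sketch on a cells × genes matrix of detection calls; this is generic k-means for illustration, not the study's exact pipeline:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means on the rows of X (e.g., cells x genes,
    with expression coded 1/0 for detected/undetected)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # squared distance of every row to every center, then nearest-center labels
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return labels, centers
```

With k = 5, cluster co-membership of marker genes (such as Pvalb with Sst, or Pvalb with the metallopeptidases) is what defines groups like PV-Sst and PV-MP above.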
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. Literature has considered several regularization terms such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies and more recently non-local means. Although TV energies are quite attractive because of their ability in edge preservation, standard explicit steepest gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
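The gap between the O(1/n) and O(1/n²) rates quoted above is the classical difference between plain and Nesterov-accelerated gradient descent. A hedged sketch on a smooth least-squares surrogate (the actual TV energy is nonsmooth and requires a proximal variant, which is omitted here):

```python
import numpy as np

def gd(A, b, steps):
    """Plain gradient descent on f(x) = 0.5*||Ax - b||^2, O(1/n) rate."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - (A.T @ (A @ x - b)) / L
    return x

def nesterov(A, b, steps):
    """Nesterov/FISTA-style accelerated descent, O(1/n^2) rate."""
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        x_new = y - (A.T @ (A @ y - b)) / L     # gradient step at the lookahead point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

On an ill-conditioned problem the accelerated iterates reach a given accuracy in far fewer steps than plain descent, which is the practical content of the rate improvement.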
Abstract:
This paper proposes a very fast method for blindly approximating a nonlinear mapping which transforms a sum of random variables. The estimation is surprisingly good even when the basic assumption is not satisfied. We use the method for providing a good initialization for inverting post-nonlinear mixtures and Wiener systems. Experiments show that the algorithm speed is strongly improved and the asymptotic performance is preserved with a very low extra computational cost.
Abstract:
Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the amount of hypothesis tests. Gene-gene interaction studies will require a memory proportional to the squared number of SNPs. A genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent from the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease. Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions with a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 processors at 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn’s disease (CD) data.
Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn’s disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
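The memory argument above rests on the fact that maxT ultimately needs only one maximum statistic per permutation, not the full permutation × hypothesis matrix. A generic Westfall-Young maxT adjustment (for illustration; not the MB-MDR implementation) can be sketched as:

```python
import numpy as np

def maxt_adjusted_pvalues(observed, perm_stats):
    """Westfall-Young maxT adjustment. `observed` holds one test statistic
    per hypothesis; `perm_stats` is (n_permutations x n_hypotheses), with
    statistics recomputed under each permutation of the trait. The adjusted
    p-value of a hypothesis is the fraction of permutations whose *maximum*
    statistic reaches its observed statistic, controlling the FWER."""
    observed = np.asarray(observed, float)
    perm_max = np.asarray(perm_stats, float).max(axis=1)  # one number per permutation
    # +1 counts the observed data as its own permutation
    return np.array([(np.sum(perm_max >= t) + 1) / (len(perm_max) + 1)
                     for t in observed])
```

Since only `perm_max` must be kept, permutation statistics can be streamed and discarded, which is what makes the memory footprint independent of the number of tested interactions.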
Abstract:
When dealing with nonlinear blind processing algorithms (deconvolution or post-nonlinear source separation), complex mathematical estimations must be done, giving as a result very slow algorithms. This is the case, for example, in speech processing, spike signal deconvolution or microarray data analysis. In this paper, we propose a simple method to reduce computational time for the inversion of Wiener systems or the separation of post-nonlinear mixtures, by using a linear approximation in a minimum mutual information algorithm. Simulation results demonstrate that linear spline interpolation is fast and accurate, obtaining very good results (similar to those obtained without approximation) while computational time is dramatically decreased. On the other hand, cubic spline interpolation also obtains similarly good results, but due to its intrinsic complexity, the global algorithm is much slower and hence not useful for our purpose.
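The linear-spline approximation at the heart of the speed-up can be illustrated with NumPy's piecewise-linear interpolator; the knot grid and the target nonlinearity below are assumptions for the example, and the mutual-information machinery itself is omitted:

```python
import numpy as np

def linear_spline(knots_x, knots_y):
    """Return a cheap piecewise-linear approximation of a nonlinearity.

    Evaluation is a single np.interp call, which is what makes the
    approximation attractive inside an iterative estimation loop."""
    def f(x):
        return np.interp(x, knots_x, knots_y)
    return f
```

For instance, approximating tanh on [-3, 3] with 31 evenly spaced knots keeps the maximum error below 0.01 while each evaluation costs only a table lookup and one linear blend.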
Abstract:
This Master's thesis was written in Lappeenranta in connection with the 5T project of the Telecom Business Research Center. The thesis examines business concepts for mobile value-added services from the operators' point of view. Value-added services broaden operators' service portfolios, and their share of the revenues of telecommunications companies, and of operators in particular, has been predicted to grow considerably. The main goal of the thesis is to bring new perspectives to, and increase understanding of, the process of building a business concept for value-added services. This knowledge is used to support the business of the Content Gateway product studied in the empirical part of the thesis. By offering a fast connection and a billing channel between third-party service providers and the operator, this product enables the operator and service providers to launch a value-added services business. The value-creation process for value-added services requires numerous cooperating parties whose collaboration is dynamic and whose communication is open, interactive and fast. Value creation also involves many converging trends. Traditional value-chain thinking is insufficient for the new, networked business environment and has been replaced by a more modern value-network model. A value network creates its competitive advantage over other networks by allocating resources and competencies optimally and by linking the cultures of strategic and operational management. This thesis compares the theoretical goals of a value network with two business concepts for value-added services. The first of these, the i-mode concept, was chosen for the comparison because of its sophistication and its characteristics that anticipate future developments. The second example concept is built around the aforementioned Content Gateway product. The study includes, among other things, analysis of partner acquisition, revenue models and network management.
As a result, the thesis provides guidelines on how an operator can build such a concept and what issues should be taken into account, particularly in business related to messaging services.
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted by super-resolution (SR) techniques, to reconstruct from a set of clinical low-resolution (LR) images a high-resolution (HR) motion-free volume. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. Literature has been quite attracted by Total Variation energies because of their ability in edge preservation, but only standard explicit steepest gradient techniques have been applied for optimization. In a preliminary work, it has been shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. Firstly, we briefly review the Bayesian and Variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Secondly, we present an extensive quantitative evaluation of our SR algorithm previously introduced, on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms to residual registration errors, and we also present a novel strategy for automatically selecting the weight of the regularization relative to the data-fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. Literature has considered several regularization terms such as Dirichlet/Laplacian energy [1], Total Variation (TV)-based energies [2,3] and more recently non-local means [4]. Although TV energies are quite attractive because of their ability in edge preservation, standard explicit steepest gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
Abstract:
Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the sum and the maximum of chi-squared statistics, which measure the average and the strongest association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which offers not only significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary membership based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
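Pascal's pathway scoring builds on Fisher-style combination of p-values. For illustration, the textbook Fisher method (the paper uses a modified variant) has a closed-form null distribution: under the null, -2·Σ ln pᵢ follows a chi-square with 2k degrees of freedom, whose survival function for even degrees of freedom is an elementary sum:

```python
import math

def fisher_combined_p(pvalues):
    """Classic Fisher combination of k independent p-values.

    X = -2 * sum(ln p_i) ~ chi-square(2k) under the null; the chi-square
    survival function for df = 2k has the closed form
    exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!, so no special functions
    are needed. Illustrative only: Pascal uses a *modified* Fisher method
    that also accounts for SNP correlation."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    term, total = 1.0, 1.0                 # i = 0 term of the Poisson-like sum
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total
```

With a single p-value the method is the identity, and combining several moderately small p-values yields a smaller combined p, which is the power gain exploited by pathway-level scoring.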
Abstract:
This paper studies the incidence and consequences of the mismatch between formal education and the educational requirements of jobs in Estonia during the years 1997-2003. We find large wage penalties associated with the phenomenon of educational mismatch. Moreover, the incidence and wage penalty of mismatches increase with age. This suggests that structural educational mismatches can occur after fast transition periods. Our results are robust across various methodologies and, more importantly, to departures from the exogeneity assumptions inherent in the matching estimators used in our analysis.
Abstract:
This paper proposes a content based image retrieval (CBIR) system using the local colour and texture features of selected image sub-blocks and global colour and shape features of the image. The image sub-blocks are roughly identified by segmenting the image into partitions of different configuration and finding the edge density in each partition using edge thresholding and morphological dilation. The colour and texture features of the identified regions are computed from the histograms of the quantized HSV colour space and the Gray Level Co-occurrence Matrix (GLCM), respectively. A combined colour and texture feature vector is computed for each region. The shape features are computed from the Edge Histogram Descriptor (EHD). A modified Integrated Region Matching (IRM) algorithm is used for finding the minimum distance between the sub-blocks of the query and target images. Experimental results show that the proposed method provides better retrieval results than some of the existing methods.
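The GLCM texture features mentioned above can be sketched directly: count co-occurring quantized gray levels at a fixed pixel offset, normalize to a joint probability table, and derive Haralick statistics such as contrast. A minimal NumPy version (HSV quantization and the colour histograms are omitted):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Gray Level Co-occurrence Matrix for one pixel offset (default:
    right-hand neighbour), normalised to joint probabilities.
    `image` must be quantised to integers in [0, levels)."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            m[image[r, c], image[r + dr, c + dc]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum of p(i, j) * (i - j)^2; high for edgy textures."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))
```

In a system like the one described, such statistics would be computed per sub-block and concatenated with the colour histogram into the region feature vector.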