995 results for Combinatorial analysis


Relevance:

30.00%

Publisher:

Abstract:

The mechanism of action and properties of a solid-phase ligand library made of hexapeptides (combinatorial peptide ligand libraries, or CPLL), used for capturing the "hidden proteome" (the low- and very low-abundance proteins constituting the vast majority of species in any proteome), as applied to plant tissues, are reviewed here. Plant tissues are notoriously recalcitrant to protein extraction and to proteome analysis. First, rigid plant cell walls must be mechanically disrupted to release the cell content; in addition to their poor protein yield, plant tissues are rich in proteases and oxidative enzymes, and contain phenolic compounds, starches, oils, pigments and secondary metabolites that massively contaminate protein extracts. In addition, complex matrices of polysaccharides, including large amounts of anionic pectins, are present. All these species compete with proteins for binding to the CPLL beads, impeding proper capture and identification/detection of low-abundance species. When properly pre-treated, plant tissue extracts are amenable to capture by the CPLL beads, thus revealing many new species, among them low-abundance proteins. Examples are given on the treatment of leaf proteins, of corn seed extracts and of exudate proteins (latex from Hevea brasiliensis). In all cases, the detection of unique gene products via CPLL capture is at least twice that of the control, untreated sample. (c) 2008 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Discovery of cis-regulatory elements in gene promoters is a highly challenging research issue in computational molecular biology. This paper presents a novel approach to searching for putative cis-regulatory elements in human promoters: first, 8-mer sequences of high statistical significance are found in the gene promoters of humans, mice, and Drosophila melanogaster, respectively; then the most conserved ones across the three species are identified (phylogenetic footprinting). In this study, a conservation analysis of both closely related species (humans and mice) and distantly related species (humans/mice and Drosophila) is conducted, not only to examine more candidates but also to improve the prediction accuracy. We have found 124 putative cis-regulatory elements and grouped them into 20 clusters. The investigation of the coexistence of these clusters in human gene promoters reveals that SP1, EGR, and NRF-1 are the dominant clusters, appearing in combinations of up to five clusters. Gene Ontology (GO) analysis also shows that many GO categories of the transcription factors binding to these cis-regulatory elements match the GO categories of the genes whose promoters contain these elements. Compared with previous research, the contribution of this study lies not only in the finding of new cis-regulatory elements, but also in its pioneering exploration of the coexistence of the discovered elements and of the GO relationship between transcription factors and regulated genes. This exploration verifies the putative cis-regulatory elements found in this study and also gives new insight into the regulatory mechanisms of gene expression.
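The first stage described above, finding 8-mers of high statistical significance in a set of promoters, can be sketched as follows. This is a minimal illustration with made-up toy sequences; the z-score against a uniform background stands in for whatever significance measure the authors actually used:

```python
from collections import Counter
import math

def kmer_zscores(sequences, k=8):
    """Score each k-mer by a z-score of its observed count against a
    uniform background expectation (a simplification of a real
    significance test)."""
    counts = Counter()
    positions = 0
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
            positions += 1
    p = 1.0 / 4 ** k                      # uniform background probability
    mu = positions * p                    # expected count of any one k-mer
    sigma = math.sqrt(positions * p * (1 - p))
    return {kmer: (n - mu) / sigma for kmer, n in counts.items()}

# Two toy "promoters" sharing a GGGGCGGGG (SP1-like) box.
promoters = ["GGGGCGGGGTTTAAAC", "ACGGGGCGGGGTGCAT"]
scores = kmer_zscores(promoters)
best = max(scores, key=scores.get)
# An 8-mer from inside the shared GGGGCGGGG box scores highest here.
```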

Relevance:

30.00%

Publisher:

Abstract:

An optimization problem arising in the analysis of controllability and stabilization of cycles in discrete time chaotic systems is considered.

Relevance:

30.00%

Publisher:

Abstract:

Feature selection is an important technique for dealing with application problems with a large number of variables and limited training samples, such as image processing, combinatorial chemistry, and microarray analysis. Commonly employed feature selection strategies can be divided into filter and wrapper methods. In this study, we propose an embedded two-layer feature selection approach that combines the advantages of filter and wrapper algorithms while avoiding their drawbacks. The hybrid algorithm, called GAEF (Genetic Algorithm with embedded filter), divides the feature selection process into two stages. In the first stage, a Genetic Algorithm (GA) is employed to pre-select features, while in the second stage a filter selector is used to further identify a small feature subset for accurate sample classification. Three benchmark microarray datasets are used to evaluate the proposed algorithm. The experimental results suggest that this embedded two-layer feature selection strategy improves both the stability of the selection results and the sample classification accuracy.
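A minimal sketch of the two-stage idea, under the assumption that a toy genetic algorithm pre-selects a feature pool and a simple between-class-separation filter (standing in for the paper's filter selector) picks the final subset. The data, the leave-one-out nearest-centroid classifier, and all GA parameters are invented for illustration:

```python
import random
random.seed(0)

# Toy data: 20 features, only features 0 and 1 carry the class signal.
def make_sample(label):
    x = [random.gauss(0, 1) for _ in range(20)]
    x[0] += 3 * label
    x[1] -= 3 * label
    return x, label

data = [make_sample(l) for l in (0, 1) * 30]      # 60 samples, 30 per class

def accuracy(feature_subset):
    """Leave-one-out nearest-centroid accuracy on the selected features."""
    correct = 0
    for i, (x, y) in enumerate(data):
        groups = {}
        for j, (xj, yj) in enumerate(data):
            if j != i:
                groups.setdefault(yj, []).append(xj)
        dists = {}
        for label, rows in groups.items():
            cent = [sum(r[f] for r in rows) / len(rows) for f in feature_subset]
            dists[label] = sum((x[f] - c) ** 2 for f, c in zip(feature_subset, cent))
        if min(dists, key=dists.get) == y:
            correct += 1
    return correct / len(data)

def ga_preselect(pop=10, gens=5, n_feat=20):
    """Stage 1: a tiny GA (selection + crossover + mutation) pre-selects features."""
    population = [[random.random() < 0.3 for _ in range(n_feat)] for _ in range(pop)]
    def fitness(mask):
        subset = [i for i, b in enumerate(mask) if b]
        return accuracy(subset) if subset else 0.0
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(n_feat)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # point mutation
                k = random.randrange(n_feat)
                child[k] = not child[k]
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return [i for i, b in enumerate(best) if b]

def filter_rank(features, top=2):
    """Stage 2: a filter (between-class mean separation) trims the pool."""
    def score(f):
        m0 = sum(x[f] for x, y in data if y == 0) / 30
        m1 = sum(x[f] for x, y in data if y == 1) / 30
        return abs(m0 - m1)
    return sorted(features, key=score, reverse=True)[:top]

pool = ga_preselect()
final = filter_rank(pool)
```

On this toy problem the filter should retain at least one of the two signal-carrying features from the GA's pool.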

Relevance:

30.00%

Publisher:

Abstract:

In conformational analysis, the systematic search method completely maps the space but suffers from the combinatorial explosion problem, because the number of conformations increases exponentially with the number of free rotation angles. This study introduces a new methodology of conformational analysis that controls the combinatorial explosion. It is based on a dimensional reduction of the system through the use of principal component analysis. The results are exactly the same as those obtained for the complete search but, in this case, the number of conformations increases only quadratically with the number of free rotation angles. The method is applied to a series of three drugs: omeprazole, pantoprazole and lansoprazole, benzimidazoles that suppress gastric-acid secretion by means of H(+),K(+)-ATPase enzyme inhibition. (C) 2002 John Wiley & Sons, Inc.
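The dimensional-reduction step can be illustrated with principal component analysis on synthetic angle data (NumPy; the number of angles, the latent dimensionality, and the noise level are all made up for illustration):

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy stand-in for conformational data: 500 "conformations" described by
# 6 free rotation angles that are really driven by only 2 underlying
# degrees of freedom, plus a little noise.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 6))
angles = latent @ mixing + 0.01 * rng.normal(size=(500, 6))

# PCA via eigendecomposition of the covariance matrix.
centered = angles - angles.mean(axis=0)
cov = centered.T @ centered / (len(angles) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues ascending
explained = eigvals[::-1] / eigvals.sum()        # variance ratios, descending

# Two principal components capture almost all of the variance, so the
# search can proceed in 2 coordinates instead of 6.
reduced = centered @ eigvecs[:, -2:]
```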

Relevance:

30.00%

Publisher:

Abstract:

The hierarchy of the segmentation cascade responsible for establishing the Drosophila body plan is composed of gap, pair-rule and segment polarity genes. However, no pair-rule stripes are formed in the anterior regions of the embryo. This lack of stripe formation, as well as other evidence from the literature that is further investigated here, led us to the hypothesis that anterior gap genes might be involved in a combinatorial mechanism responsible for repressing the cis-regulatory modules (CRMs) of the hairy (h), even-skipped (eve), runt (run), and fushi-tarazu (ftz) anterior-most stripes. In this study, we investigated huckebein (hkb), which has a gap expression domain at the anterior tip of the embryo. Using genetic methods, we were able to detect deviations from the wild-type patterns of the anterior-most pair-rule stripes in different genetic backgrounds that were consistent with Hkb-mediated repression. Moreover, we developed an image processing tool that, for the most part, confirmed our assumptions. Using an hkb misexpression system, we further detected specific repression of anterior stripes. Furthermore, bioinformatics analysis predicted an increased significance of binding site clusters in the CRMs of h 1, eve 1, run 1 and ftz 1 when Hkb was incorporated into the analysis, indicating that Hkb plays a direct role in these CRMs. We further discuss how Hkb and Slp1, the other previously identified common repressor of anterior stripes, might participate in a combinatorial repression mechanism that controls stripe CRMs in the anterior parts of the embryo and defines the borders of these anterior stripes. (C) 2011 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

A new algorithm for evaluating the top event probability of large fault trees (FTs) is presented. This algorithm does not require any previous qualitative analysis of the FT. Indeed, its efficiency is independent of the FT logic; it depends only on the number n of basic system components and on their failure probabilities. Our method provides exact lower and upper bounds on the top event probability by using new properties of the intrinsic order relation between binary strings. The intrinsic order enables one to select binary n-tuples with large occurrence probabilities without needing to evaluate them. This drastically reduces the complexity of the problem from exponential (2^n binary n-tuples) to linear (n Boolean variables)...
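For contrast, the exponential baseline that the intrinsic-order bounds avoid, namely computing the exact top event probability by enumerating all 2^n binary n-tuples, looks like this for a toy AND/OR fault tree with invented failure probabilities:

```python
from itertools import product

# Toy fault tree over n = 4 basic components:
# TOP = (c0 AND c1) OR (c2 AND c3)
def top_event(state):
    return (state[0] and state[1]) or (state[2] and state[3])

p_fail = [0.1, 0.2, 0.05, 0.3]    # invented component failure probabilities

def exact_top_probability():
    """Exact probability by summing over all 2**n binary n-tuples,
    the exponential cost that bound-based methods sidestep."""
    total = 0.0
    for state in product([0, 1], repeat=len(p_fail)):
        pr = 1.0
        for bit, p in zip(state, p_fail):
            pr *= p if bit else (1 - p)
        if top_event(state):
            total += pr
    return total

p_top = exact_top_probability()
# By inclusion-exclusion: 0.1*0.2 + 0.05*0.3 - (0.1*0.2)*(0.05*0.3) = 0.0347
```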

Relevance:

30.00%

Publisher:

Abstract:

Persistent Topology is an innovative way of matching topology and geometry, and it proves to be an effective mathematical tool in shape analysis. In order to express its full potential for applications, it has to interface with the typical environment of Computer Science: it must be possible to deal with a finite sampling of the object of interest, and with combinatorial representations of it. Following that idea, the main result states that it is possible to construct a relation between the persistent Betti numbers (PBNs; also called the rank invariant) of a compact Riemannian submanifold X of R^m and those of an approximation U of X itself, where U is generated by a ball covering centered in the points of the sampling. Moreover, we state a further result in which we relate X with a finite simplicial complex S generated, through a particular construction, by the sampling points. More precisely, strict inequalities hold only in "blind strips", i.e. narrow areas around the discontinuity sets of the PBNs of U (or S). Outside the blind strips, the values of the PBNs of the original object, of its ball covering, and of the simplicial complex coincide, respectively.

Relevance:

30.00%

Publisher:

Abstract:

This work deals with the car sequencing (CS) problem, a combinatorial optimization problem for sequencing mixed-model assembly lines. The aim is to find a production sequence for different variants of a common base product such that work overload of the respective line operators is avoided or minimized. The variants are distinguished by certain options (e.g., sun roof yes/no) and therefore require different processing times at the stations of the line. CS introduces a so-called sequencing rule H:N for each option, which restricts the occurrence of this option to at most H in any N consecutive variants. It seeks a sequence that leads to no, or a minimum number of, sequencing rule violations. In this work, the suitability of CS for workload-oriented sequencing is analyzed. To this end, its solution quality is compared in experiments to that of the related mixed-model sequencing problem. A new sequencing rule generation approach as well as a new lower bound for the problem are presented. Different exact and heuristic solution methods for CS are developed and their efficiency is shown in experiments. Furthermore, CS is adjusted and applied to a resequencing problem with pull-off tables.
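An H:N sequencing rule can be checked with a short sketch. The toy variants and the window-based counting convention below are assumptions for illustration; the literature also uses other violation-counting schemes:

```python
def rule_violations(sequence, rules):
    """Count sequencing-rule violations: for each option with rule H:N,
    every window of N consecutive variants may contain the option at
    most H times; each window exceeding H counts as one violation."""
    violations = 0
    for option, (h, n) in rules.items():
        hits = [1 if option in variant else 0 for variant in sequence]
        for start in range(len(hits) - n + 1):
            if sum(hits[start:start + n]) > h:
                violations += 1
    return violations

# Toy instance: option "s" = sun roof with rule 1:2
# (at most 1 sun roof in any 2 consecutive cars).
seq_bad  = [{"s"}, {"s"}, set(), {"s"}]   # the first window holds 2 > H
seq_good = [{"s"}, set(), {"s"}, set()]   # no window exceeds H
```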

Relevance:

30.00%

Publisher:

Abstract:

Formation of cartilage and bone involves sequential processes in which undifferentiated mesenchyme aggregates into primordial condensations which subsequently grow and differentiate, resulting in morphogenesis of the adult skeleton. While much has been learned about the structural molecules which comprise cartilage and bone, little is known about the nuclear factors which regulate chondrogenesis and osteogenesis. MHox is a homeobox-containing gene which is expressed in the mesenchyme of facial, limb, and vertebral skeletal precursors during mouse embryogenesis. MHox expression has been shown to require epithelial-derived signals, suggesting that MHox may regulate the epithelial-mesenchymal interactions required for skeletal organogenesis. To determine the functions of MHox, we generated a loss-of-function mutation in the MHox gene. Mice homozygous for a mutant MHox allele exhibit defects of skeletogenesis, involving the loss or malformation of craniofacial, limb and vertebral skeletal structures. The affected skeletal elements are derived from the cranial neural crest, as well as from somitic and lateral mesoderm. Analysis of the mutant phenotype during ontogeny demonstrated a defect in the formation or growth of chondrogenic and osteogenic precursors. These findings provide evidence that MHox regulates the formation of preskeletal condensations from undifferentiated mesenchyme. In addition, generation of mice doubly mutant for the MHox and S8 homeobox genes reveals that these two genes interact to control formation of the limb and craniofacial skeleton. Mice carrying mutant alleles for S8 and MHox exhibit an exaggeration of the craniofacial and limb phenotypes observed in the MHox mutant mouse. Thus, MHox and S8 are components of a combinatorial genetic code controlling generation of the skeleton of the skull and limbs.

Relevance:

30.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies.

In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on Statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% with respect to the simulation-based reference values.

A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms of each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This Ph.D. Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level for the final results of the optimization process, we can use more relaxed levels, and hence considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to x240 for small/medium-sized problems.

Finally, this work introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
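A minimal sketch of the kind of Monte-Carlo evaluation the thesis accelerates: quantizing the signals of a toy datapath y = x1*x2 + x3 to a candidate fractional word-length and estimating the output round-off noise power from random samples. The datapath, rounding scheme, and word-lengths are invented for illustration; the actual models in the thesis are far more sophisticated:

```python
import random
random.seed(0)

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with the given number of fractional
    bits (round-to-nearest, no saturation: a simplification)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def output_noise_power(frac_bits, trials=20000):
    """Monte-Carlo estimate of the output round-off noise power for the
    toy datapath y = x1*x2 + x3 when every signal is quantized."""
    acc = 0.0
    for _ in range(trials):
        x1, x2, x3 = (random.uniform(-1, 1) for _ in range(3))
        exact = x1 * x2 + x3
        q = quantize(quantize(x1, frac_bits) * quantize(x2, frac_bits),
                     frac_bits) + quantize(x3, frac_bits)
        acc += (exact - q) ** 2
    return acc / trials

# More fractional bits shrink the noise power (hardware cost grows instead),
# which is the accuracy/cost trade-off that word-length optimization explores.
noise_8 = output_noise_power(8)
noise_12 = output_noise_power(12)
```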

Relevance:

30.00%

Publisher:

Abstract:

Site-directed mutagenesis and combinatorial libraries are powerful tools for providing information about the relationship between protein sequence and structure. Here we report two extensions that expand the utility of combinatorial mutagenesis for the quantitative assessment of hypotheses about the determinants of protein structure. First, we show that resin-splitting technology, which allows the construction of arbitrarily complex libraries of degenerate oligonucleotides, can be used to construct more complex protein libraries for hypothesis testing than can be constructed from oligonucleotides limited to degenerate codons. Second, using eglin c as a model protein, we show that regression analysis of activity scores from library data can be used to assess the relative contributions to the specific activity of the amino acids that were varied in the library. The regression parameters derived from the analysis of a 455-member sample from a library wherein four solvent-exposed sites in an α-helix can contain any of nine different amino acids are highly correlated (P < 0.0001, R^2 = 0.97) to the relative helix propensities for those amino acids, as estimated by a variety of biophysical and computational techniques.
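The regression step, recovering per-amino-acid contributions from library activity scores, can be mimicked on mock data (NumPy). The design matrix encodes which of nine amino acids occupies each of four varied sites, as in the paper, but the scores and contributions here are entirely invented:

```python
import numpy as np
rng = np.random.default_rng(1)

n_aa, n_sites, n_clones = 9, 4, 455
true_contrib = rng.normal(size=n_aa)        # invented per-residue effects

# Each mock clone has one of 9 amino acids at each of 4 varied sites;
# the design matrix X holds the count of each amino acid per clone.
choices = rng.integers(0, n_aa, size=(n_clones, n_sites))
X = np.zeros((n_clones, n_aa))
for row, picks in zip(X, choices):
    for aa in picks:
        row[aa] += 1

# Mock activity scores: additive contributions plus measurement noise.
scores = X @ true_contrib + 0.05 * rng.normal(size=n_clones)

# Least-squares regression recovers the relative contributions.
est, *_ = np.linalg.lstsq(X, scores, rcond=None)
r = np.corrcoef(est, true_contrib)[0, 1]
```

With additive effects and modest noise, the estimated coefficients correlate strongly with the planted contributions, which is the kind of correlation the paper reports against helix propensities.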

Relevance:

30.00%

Publisher:

Abstract:

Synaptotagmins (Syts) are a family of vesicle proteins that have been implicated in both regulated neurosecretion and general membrane trafficking. Calcium-dependent interactions mediated through their C2 domains are proposed to contribute to the mechanism by which Syts trigger calcium-dependent neurotransmitter release. Syt IV is a novel member of the Syt family that is induced by cell depolarization and has a rapid rate of synthesis and a short half-life. Moreover, the C2A domain of Syt IV does not bind calcium. We have examined the biochemical and functional properties of the C2 domains of Syt IV. Consistent with its non–calcium binding properties, the C2A domain of Syt IV binds syntaxin isoforms in a calcium-independent manner. In neuroendocrine pheochromocytoma (PC12) cells, Syt IV colocalizes with Syt I in the tips of the neurites. Microinjection of the C2A domain reveals that calcium-independent interactions mediated through this domain of Syt IV inhibit calcium-mediated neurotransmitter release from PC12 cells. Conversely, the C2B domain of Syt IV contains calcium binding properties, which permit homo-oligomerization as well as hetero-oligomerization with Syt I. Our observation that different combinatorial interactions exist between Syt and syntaxin isoforms, coupled with the calcium stimulated hetero-oligomerization of Syt isoforms, suggests that the secretory machinery contains a vast repertoire of biochemical properties for sensing calcium and regulating neurotransmitter release accordingly.

Relevance:

30.00%

Publisher:

Abstract:

We describe here a method, based on iterative colony filter screening, for the rapid isolation of binding specificities from a large synthetic repertoire of human antibody fragments in single-chain Fv configuration. Escherichia coli cells expressing the library of antibody fragments are grown on a porous master filter, in contact with a second filter coated with the antigen, onto which antibodies secreted by the bacteria are able to diffuse. Detection of antigen binding on the second filter allows the recovery of a number of E. coli cells, including those expressing the binding specificity of interest, which can be submitted to a second round of screening for the isolation of specific monoclonal antibodies. We tested the methodology using as antigen the ED-B domain of fibronectin, a marker of angiogenesis. From an antibody library of 7 × 10^8 clones, we recovered a number of specifically binding antibodies of different amino acid sequence. The antibody clone showing the strongest enzyme-linked immunosorbent assay signal (ME4C) was further characterised. Its epitope on the ED-B domain was mapped using the SPOT synthesis method, which uses a set of decapeptides spanning the antigen sequence, synthesised and anchored on cellulose. ME4C binds to the ED-B domain with a dissociation constant Kd = 1 × 10^-7 M and specifically stains tumour blood vessels, as shown by immunohistochemical analysis of tumour sections of human and murine origin.