953 results for Complex combinatorial problem
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results.
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms in each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at any given time is kept under control and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level for the final results of the optimization process, we can use more relaxed confidence levels, and therefore considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
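The abstract gives no implementation details, so the following is only a minimal, self-contained Python sketch of the incremental idea on a hypothetical toy system: a greedy word-length search whose Monte-Carlo sample count (its confidence level) starts relaxed and is only tightened once the search stalls. All names (quantize, noise_power, greedy_incremental) and the toy 3-tap datapath are illustrative assumptions, not HOPLITE's API.

```python
import numpy as np

rng = np.random.default_rng(42)
COEFFS = np.array([0.21, -0.47, 0.33])   # toy 3-tap dot product standing in for a real datapath

def quantize(v, bits):
    """Round v to `bits` fractional bits (fixed-point rounding)."""
    scale = 2.0 ** bits
    return np.round(v * scale) / scale

def noise_power(wordlengths, n_samples):
    """Monte-Carlo estimate of the output quantization-noise power for one assignment."""
    x = rng.uniform(-1.0, 1.0, size=(n_samples, COEFFS.size))
    exact = x @ COEFFS
    quant = sum(quantize(x[:, i] * COEFFS[i], wordlengths[i]) for i in range(COEFFS.size))
    return float(np.mean((quant - exact) ** 2))

def greedy_incremental(noise_budget, start_bits=16, n_start=2_000, n_final=200_000):
    """Greedy one-bit word-length reductions; early probes use a relaxed Monte-Carlo
    sample count, which is only raised towards n_final once no reduction is accepted."""
    wl = [start_bits] * COEFFS.size
    n = n_start
    while True:
        feasible = []
        for i in range(len(wl)):
            trial = wl.copy()
            trial[i] -= 1
            if trial[i] < 1:
                continue
            p = noise_power(trial, n)
            if p <= noise_budget:
                feasible.append((p, trial))
        if feasible:
            wl = min(feasible)[1]        # keep the least-noisy feasible reduction
        elif n < n_final:
            n = min(n * 4, n_final)      # no move found: raise the confidence and retry
        else:
            # A full flow would re-validate wl at the strict confidence level here.
            return wl

print(greedy_incremental(noise_budget=1e-6))   # prints the selected fractional bits per signal
```

The relaxed early probes are where the speed-up comes from: most of the candidate reductions are evaluated with a small fraction of the samples that the final answer is held to.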
Abstract:
Transport of peptides across the membrane of the endoplasmic reticulum for assembly with MHC class I molecules is an essential step in antigen presentation to cytotoxic T cells. This task is performed by the major histocompatibility complex-encoded transporter associated with antigen processing (TAP). Using a combinatorial approach we have analyzed the substrate specificity of human TAP at high resolution and in the absence of any given sequence context, revealing the contribution of each peptide residue to stabilizing binding to TAP. Human TAP was found to be highly selective, with peptide affinities covering at least three orders of magnitude. Interestingly, the selectivity is not equally distributed over the substrate: only the three N-terminal positions and the C-terminal residue are critical, whereas effects from other peptide positions are negligible. A major influence from the peptide backbone was uncovered by peptide scans and libraries containing D-amino acids. Again, independent of peptide length, critical positions were clustered near the peptide termini. These approaches demonstrate that human TAP is selective, with the residues determining affinity located in distinct regions, and point to the role of the peptide backbone in binding to TAP. This binding mode of TAP has implications for optimized repertoire selection and for coevolution with the major histocompatibility complex/T-cell receptor complex.
Abstract:
The central problem of complex inheritance is to map oligogenes for disease susceptibility, integrating linkage and association over samples that differ in several ways. Combination of evidence over multiple samples with 1,037 families supports loci contributing to asthma susceptibility in the cytokine region on 5q [maximum logarithm of odds (lod) = 2.61 near IL-4], but no evidence for atopy. The principal problems with retrospective collaboration on linkage appear to have been solved, providing far more information than a single study. A multipoint lod table evaluated at commonly agreed reference loci is required for both collaboration and metaanalysis, but variations in ascertainment, pedigree structure, phenotype definition, and marker selection are tolerated. These methods are invariant with statistical methods that increase the power of lods and are applicable to all diseases, motivating collaboration rather than competition. In contrast to linkage, positional cloning by allelic association has yet to be extended to multiple samples, a prerequisite for efficient combination with linkage and the greatest current challenge to genetic epidemiology.
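For reference, the lod score quoted above follows the standard definition (general usage, not specific to this study): the base-10 logarithm of the likelihood ratio of linkage at recombination fraction θ against free recombination,

```latex
\mathrm{lod}(\theta) \;=\; \log_{10}\frac{L(\text{data}\mid\theta)}{L(\text{data}\mid\theta=\tfrac{1}{2})},
```

so the reported maximum lod of 2.61 near IL-4 corresponds to odds of roughly 10^{2.61}, about 400:1, in favour of linkage at the best-supported θ.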
Abstract:
Dynamic combinatorial libraries are mixtures of compounds that exist in a dynamic equilibrium and can be driven to compositional self-adaptation via selective binding of a specific assembly of certain components to a molecular target. We present here an extension of this initial concept to dynamic libraries that consist of two levels, the first formed by the coordination of terpyridine-based ligands to the transition metal template, and the second by imine formation with the aldehyde substituents on the terpyridine moieties. Dialdehyde 7 has been synthesized, converted into a variety of ligands, oxime ethers L1–L3 and acyl hydrazones L4–L7, and subsequently into the corresponding cobalt complexes. A typical complex, [Co(L2)2]2+, is shown to engage in rapid exchange with a competing ligand L1 and with another [Co(L2)2]2+ complex in 30% acetonitrile/water at pH 7.0 and 25°C. The exchange in the corresponding Co(III) complexes is shown to be much slower. Imine exchange in the acyl hydrazone complexes (L4–L7) is strongly controlled by pH and temperature. The two types of exchange, ligand and imine, can thus be used as independent equilibrium processes controlled by different types of external intervention, i.e., via oxidation/reduction of the metal template and/or change in the pH/temperature of the medium. The resulting double-level dynamic libraries are therefore named orthogonal, in analogy with orthogonal protecting groups in organic synthesis. Sample libraries of this type have been synthesized and showed the complete expected set of components in electrospray ionization MS.
Abstract:
Sed5p is the only syntaxin family member required for protein transport through the yeast Golgi and it is known to bind up to nine other soluble N-ethylmaleimide-sensitive factor attachment receptor (SNARE) proteins in vivo. We describe in vitro binding experiments in which we identify ternary and quaternary Sed5p-containing SNARE complexes. The formation of SNARE complexes among these endoplasmic reticulum- and Golgi-localized proteins requires Sed5p and is syntaxin-selective. In addition, Sed5p-containing SNARE complexes form selectively and this selectivity is mediated by Sed5p-containing intermediates that discriminate among subsequent binding partners. Although many of these SNAREs have overlapping distributions in vivo, the SNAREs that form complexes with Sed5p in vitro reflect their functionally distinct locales. Although SNARE–SNARE interactions are promiscuous and a single SNARE protein is often found in more than one complex, both the biochemical as well as genetic analyses reported here suggest that this is not a result of nonselective direct substitution of one SNARE for another. Rather our data are consistent with the existence of multiple (perhaps parallel) trafficking pathways where Sed5p-containing SNARE complexes play overlapping and/or distinct functional roles.
Abstract:
The major hurdle to be cleared in active immunotherapy of cancer is the poor immunogenicity of cancer cells. In previous attempts to overcome this problem, whole tumor cells have been used as vaccines, either admixed with adjuvant(s) or genetically engineered to express nonself proteins or immunomodulatory factors before application. We have developed a novel approach to generate an immunogenic, highly effective vaccine: major histocompatibility complex (MHC) class I-positive cancer cells are administered together with MHC class I-matched peptide ligands of foreign, nonself origin, generated by a procedure we term transloading. Murine tumor lines of the H2-Kd or the H2-Db haplotype, melanoma M-3 and B16-F10, respectively, as well as colon carcinoma CT-26 (H2-Kd), were transloaded with MHC-matched influenza virus-derived peptides and applied as irradiated vaccines. Mice bearing a deposit of live M-3 melanoma cells were efficiently cured by this treatment. In the CT-26 colon carcinoma and the B16-F10 melanoma, high efficacies were obtained against tumor challenge, suggesting the universal applicability of this new type of vaccine. With foreign peptide ligands adapted to the requirements of a desired MHC class I haplotype, this concept may be used for the treatment of human cancers.
Abstract:
Genes containing the interferon-stimulated response element (ISRE) enhancer have been characterized as transcriptionally responsive primarily to type I interferons (IFN alpha/beta). Induction is due to activation of a multimeric transcription factor, interferon-stimulated gene factor 3 (ISGF3), which is activated by IFN alpha/beta but not by IFN gamma. We found that ISRE-containing genes were induced by IFN gamma as well as by IFN alpha in Vero cells. The IFN gamma response was dependent on the ISRE and was accentuated by preexposure of cells to IFN alpha, a treatment that increases the abundance of ISGF3 components. Overexpression of ISGF3 polypeptides showed that the IFN gamma response depended on the DNA-binding protein ISGF3 gamma (p48) as well as on the 91-kDa protein STAT91 (Stat1 alpha). The transcriptional response to IFN alpha required the 113-kDa protein STAT113 (Stat2) in addition to STAT91 and p48. Mutant fibrosarcoma cells deficient in each component of ISGF3 were used to confirm that IFN gamma induction of an ISRE reporter required p48 and STAT91, but not STAT113. A complex containing p48 and phosphorylated STAT91 but lacking STAT113 bound the ISRE in vitro. IFN gamma-induced activation of this complex, preferentially formed at high concentrations of p48 and STAT91, may explain some of the overlapping responses to IFN alpha and IFN gamma.
Abstract:
Society today is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services needed to perform our daily tasks. Subconsciously, we rely increasingly on network management systems, which allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect, and enhance the main distributed systems. Their role is secondary, unknown and transparent to users: they provide the support needed to maintain the distributed systems whose services we use every day. If network management systems are not considered during the development of a distributed system, there can be serious consequences, including total failure of the development effort. It is therefore necessary to consider system management within the design of distributed systems and to systematize its design in order to minimize the impact of network management on distributed-system projects. In this paper, we present a framework that allows network management systems to be designed systematically. To accomplish this goal, formal modelling tools are used to model, sequentially, the different proposed views of the same problem. These views cover all the aspects involved in the system and are based on process definitions that identify the responsible parties and the agents involved, in order to propose a deployment over a distributed architecture that is both feasible and appropriate.
Abstract:
In this article, a new methodology is presented to obtain representation models for an a priori relation z = u(x1, x2, ..., xn) (1), from a known experimental dataset {(zi; x1i, x2i, x3i, ..., xni)}, i = 1, 2, ..., p. In this methodology, a potential energy is first defined over each possible model for relationship (1), which allows Lagrangian mechanics to be applied to the resulting system. Solving the Euler–Lagrange equations of this system yields the optimal solution according to the principle of least action. The Lagrangian defined corresponds to a continuous medium over which an n-dimensional finite-element model is applied, so a solution to the problem can be obtained by solving a compatible, determined, symmetric linear system of equations. The computational implementation of the methodology improves on the process of obtaining the representation models previously published by the authors.
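The abstract does not give the explicit energy functional. As an illustration only (our assumption about the general shape of such a formulation, not the authors' exact Lagrangian), a quadratic data-misfit-plus-smoothness functional, discretized with a finite-element basis u = sum_j c_j phi_j, leads to precisely this kind of compatible, determined, symmetric linear system:

```latex
J[u] \;=\; \tfrac{1}{2}\sum_{i=1}^{p}\bigl(u(x_{1i},\dots,x_{ni})-z_i\bigr)^{2}
       \;+\; \tfrac{\lambda}{2}\int_{\Omega}\lVert\nabla u\rVert^{2}\,d\Omega,
\qquad
\frac{\partial J}{\partial c_j}=0
\;\;\Longrightarrow\;\;
\bigl(A^{\mathsf{T}}A+\lambda R\bigr)\,c \;=\; A^{\mathsf{T}}z,
```

where A_{ij} = phi_j(x_{1i}, ..., x_{ni}) collects the basis functions evaluated at the data points and R is the symmetric stiffness matrix R_{jk} = \int_\Omega \nabla phi_j \cdot \nabla phi_k \, d\Omega, assembled element by element.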
Abstract:
In recent times the Douglas–Rachford algorithm has been observed empirically to solve a variety of nonconvex feasibility problems including those of a combinatorial nature. For many of these problems current theory is not sufficient to explain this observed success and is mainly concerned with questions of local convergence. In this paper we analyze global behavior of the method for finding a point in the intersection of a half-space and a potentially non-convex set which is assumed to satisfy a well-quasi-ordering property or a property weaker than compactness. In particular, the special case in which the second set is finite is covered by our framework and provides a prototypical setting for combinatorial optimization problems.
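As a concrete illustration of the special case mentioned in the last sentence (a minimal sketch of our own, assuming a standard form of the iteration rather than the paper's exact setup), the Douglas–Rachford method for a half-space and a finite set needs only the two projections:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto the half-space {x : <a, x> <= b}."""
    excess = a @ x - b
    return x if excess <= 0 else x - (excess / (a @ a)) * a

def project_finite_set(x, points):
    """Projection onto a finite set: the nearest candidate point."""
    return points[np.argmin(np.linalg.norm(points - x, axis=1))]

def douglas_rachford(x0, a, b, points, iters=100):
    """x_{k+1} = x_k + P_B(2 P_A(x_k) - x_k) - P_A(x_k); the shadow P_A(x_k) is the candidate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        pa = project_halfspace(x, a, b)
        pb = project_finite_set(2.0 * pa - x, points)
        x = x + pb - pa
    return project_halfspace(x, a, b)

# Toy combinatorial instance: binary points in {0,1}^3 intersected with x1 + x2 + x3 >= 2.
pts = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], dtype=float)
print(douglas_rachford([0.2, 0.1, 0.3], a=-np.ones(3), b=-2.0, points=pts))
```

At a fixed point the shadow P_A(x) lies in both sets, so the printed point is a binary vector satisfying the half-space constraint; the finite second set is exactly the setting the paper singles out as a prototype for combinatorial problems.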
Abstract:
Numerical modelling methodologies are important for engineering and scientific problems because there are processes for which analytical mathematical expressions cannot be obtained. When the only available information is a set of experimental values for the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology, based on the Galerkin formulation of the finite-element method, to obtain representations of relationships that are defined a priori between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a generalized sense within this space. Using this approach requires solving a linear system whose structure allows a fast solver algorithm. The algorithm can be used in a variety of fields, making it a multidisciplinary tool. The validity of the methodology is studied on two real applications: a problem in hydrodynamics and an engineering problem involving fluids, heat and transport in an energy-generation plant. The predictive capacity of the methodology is also tested using cross-validation.
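The abstract does not spell out the discretization, so here is a deliberately simplified one-dimensional sketch of the same ingredients (piecewise finite-element basis, experimental samples, symmetric linear system); the hat-function basis, the small roughness penalty and all function names are our own assumptions, not the paper's formulation.

```python
import numpy as np

def hat_design_matrix(x, nodes):
    """Rows: samples; columns: piecewise-linear hat basis functions on `nodes`."""
    x = np.asarray(x, dtype=float)
    A = np.zeros((x.size, nodes.size))
    idx = np.clip(np.searchsorted(nodes, x) - 1, 0, nodes.size - 2)
    t = (x - nodes[idx]) / (nodes[idx + 1] - nodes[idx])   # local coordinate in [0, 1]
    A[np.arange(x.size), idx] = 1.0 - t
    A[np.arange(x.size), idx + 1] = t
    return A

def fit_piecewise_linear(x, y, nodes, smooth=1e-3):
    """Least-squares fit of nodal values c: the system (A^T A + smooth*D^T D) c = A^T y
    is symmetric and banded, so a fast solver can be used."""
    A = hat_design_matrix(x, nodes)
    D = np.diff(np.eye(nodes.size), n=2, axis=0)   # second-difference roughness penalty
    K = A.T @ A + smooth * (D.T @ D)
    return np.linalg.solve(K, A.T @ y)

# Toy usage: recover y = sin(x) from noisy experimental samples.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, np.pi, 200)
ys = np.sin(xs) + 0.05 * rng.normal(size=xs.size)
nodes = np.linspace(0.0, np.pi, 21)
coeffs = fit_piecewise_linear(xs, ys, nodes)
print(np.max(np.abs(coeffs - np.sin(nodes))))   # nodal values closely track sin at the nodes
```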
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A major problem in de novo design of enzyme inhibitors is the unpredictability of the induced fit, with the shape of both ligand and enzyme changing cooperatively and unpredictably in response to subtle structural changes within a ligand. We have investigated the possibility of dampening the induced fit by using a constrained template as a replacement for adjoining segments of a ligand. The template preorganizes the ligand structure, thereby organizing the local enzyme environment. To test this approach, we used templates consisting of constrained cyclic tripeptides, formed through side chain to main chain linkages, as structural mimics of the protease-bound extended beta-strand conformation of three adjoining amino acid residues at the N- or C-terminal sides of the scissile bond of substrates. The macrocyclic templates were derivatized to a range of 30 structurally diverse molecules via focused combinatorial variation of nonpeptidic appendages incorporating a hydroxyethylamine transition-state isostere. Most compounds in the library were potent inhibitors of the test protease (HIV-1 protease). Comparison of crystal structures for five protease-inhibitor complexes containing an N-terminal macrocycle and three protease-inhibitor complexes containing a C-terminal macrocycle establishes that the macrocycles fix their surrounding enzyme environment, thereby permitting independent variation of acyclic inhibitor components with only local disturbances to the protease. In this way, the location in the protease of various acyclic fragments on either side of the macrocyclic template can be accurately predicted. This type of templating strategy minimizes the problem of induced fit, reducing unpredictable cooperative effects in one inhibitor region caused by changes to adjacent enzyme-inhibitor interactions. This idea might be exploited in template-based approaches to inhibitors of other proteases, where a beta-strand mimetic is also required for recognition, and also other protein-binding ligands where different templates may be more appropriate.
Abstract:
We demonstrate that the process of generating smooth transitions can be viewed as a natural result of the filtering operations implied in generating discrete-time series observations by sampling data from an underlying continuous-time process that has undergone structural change. To focus the discussion, we use the problem of estimating the location of abrupt shifts in some simple time-series models. This approach permits us to address salient issues relating to the distortions induced by the inherent aggregation associated with discrete-time sampling of continuous-time processes experiencing structural change. We also address the issue of how time-irreversible structures may be generated within the smooth transition processes.
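To make the mechanism concrete (an illustrative toy example under our own assumptions, not the authors' models), averaging a fine-grained path with an instantaneous level shift over discrete sampling windows turns the abrupt break into an apparently smooth transition:

```python
import numpy as np

rng = np.random.default_rng(1)
fine = 10_000                               # fine grid standing in for continuous time
t = np.linspace(0.0, 1.0, fine)
path = np.where(t < 0.5, 0.0, 1.0) + 0.05 * rng.normal(size=fine)   # abrupt structural shift

window = 800                                # averaging filter implied by the sampling scheme
filtered = np.convolve(path, np.ones(window) / window, mode="valid")
sampled = filtered[::200]                   # discrete-time observations of the filtered path

# The sampled series ramps gradually from about 0 to about 1 around the break,
# even though the underlying change was instantaneous at the fine time scale.
print(np.round(sampled, 2))
```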