917 results for Combinatorial Grassmannian
Abstract:
Hematopoiesis is a well-established system used to study developmental choices amongst cells with multiple lineage potentials, as well as the transcription factor network interactions that drive these developmental paths. Multipotent progenitors travel from the bone marrow to the thymus where T-cell development is initiated and these early T-cell precursors retain lineage plasticity even after initiating a T-cell program. The development of these early cells is driven by Notch signaling and the combinatorial expression of many transcription factors, several of which are also involved in the development of other cell lineages. The ETS family transcription factor PU.1 is involved in the development of progenitor, myeloid, and lymphoid cells, and can divert progenitor T-cells from the T-lineage to a myeloid lineage. This diversion of early T-cells by PU.1 can be blocked by Notch signaling. The PU.1 and Notch interaction creates a switch wherein PU.1 in the presence of Notch promotes T-cell identity and PU.1 in the absence of Notch signaling promotes a myeloid identity. Here we characterized an early T-cell cell line, Scid.adh.2c2, as a good model system for studying the myeloid vs. lymphoid developmental choice dependent on PU.1 and Notch signaling. We then used the Scid.adh.2c2 system to identify mechanisms mediating PU.1 and Notch signaling interactions during early T-cell development. We show that the mechanism by which Notch signaling is protecting pro-T cells is neither degradation nor modification of the PU.1 protein. Instead we give evidence that Notch signaling is blocking the PU.1-driven inhibition of a key set of T-regulatory genes including Myb, Tcf7, and Gata3. We show that the protection of Gata3 from PU.1-mediated inhibition, by Notch signaling and Myb, is important for retaining a T-lineage identity. We also discuss a PU.1-driven mechanism involving E-protein inhibition that leads to the inhibition of Notch target genes. 
This mechanism may be used as a lockdown mechanism in pro-T cells that have made the decision to divert to the myeloid pathway.
Abstract:
This dissertation contains three essays on mechanism design. The common goal of these essays is to assist in the solution of different resource allocation problems where asymmetric information creates obstacles to the efficient allocation of resources. In each essay, we present a mechanism that satisfactorily solves the resource allocation problem and study some of its properties. In our first essay, "Combinatorial Assignment under Dichotomous Preferences", we present a class of problems akin to time scheduling without a pre-existing time grid, and propose a mechanism that is efficient, strategy-proof, and envy-free. Our second essay, "Monitoring Costs and the Management of Common-Pool Resources", studies what can happen to an existing mechanism — the individual transferable quotas (ITQ) mechanism, also known as the cap-and-trade mechanism — when quota enforcement is imperfect and costly. Our third essay, "Vessel Buyback", coauthored with John O. Ledyard, presents an auction design that can be used to buy back excess capital in overcapitalized industries.
Abstract:
The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.
Abstract:
β-lactamases are a group of enzymes that confer resistance to penam and cephem antibiotics by hydrolysis of the β-lactam ring, thereby inactivating the antibiotic. Crystallographic and computer modeling studies of RTEM-1 β-lactamase have indicated that Asp 132, a strictly conserved residue among the class A β-lactamases, appears to be involved in substrate binding, catalysis, or both. To study the contribution of residue 132 to β-lactamase function, site saturation mutagenesis was used to generate mutants coding for all 20 amino acids at position 132. Phenotypic screening of all mutants indicated that position 132 is very sensitive to amino acid changes, with only N132C, N132D, N132E, and N132Q showing any appreciable activity. Kinetic analysis of three of these mutants showed increases in K_M, along with substantial decreases in k_(cat). Efforts to trap a stable acyl-enzyme intermediate were unsuccessful. These results indicate that residue 132 is involved in substrate binding, as well as catalysis, and support the involvement of this residue in acylation as suggested by Strynadka et al.
Crystallographic and computer modeling studies of RTEM-1 β-lactamase have indicated that Lys 73 and Glu 166, two strictly conserved residues among the class A β-lactamases, appear to be involved in substrate binding, catalysis, or both. To study the contribution of these residues to β-lactamase function, site saturation mutagenesis was used to generate mutants coding for all 20 amino acids at positions 73 and 166. Then all 400 possible combinations of mutants were created by combinatorial mutagenesis. The colonies harboring the mutants were screened for growth in the presence of ampicillin. The DNA of the competent colonies was sequenced, and kinetic parameters were investigated. It was found that lysine is essential at position 73, and that position 166 only tolerated fairly conservative changes (aspartic acid, histidine, and tyrosine). These functional mutants exhibited decreased k_(cat) values, but K_M remained close to wild-type levels. The results of the combinatorial mutagenesis experiments indicate that lysine is absolutely required for activity at position 73; no mutation at residue 166 can compensate for loss of the long side chain amine. The active mutants found (K73K/E166D, K73K/E166H, and K73K/E166Y) were studied by kinetic analysis. These results reaffirmed the function of residue 166 as important in catalysis, specifically deacylation.
The identity of the residue responsible for activating the active site serine (Ser 70) in RTEM-1 β-lactamase has been disputed for some time. Recently, analysis of a crystal structure of RTEM-1 β-lactamase with a covalently bound intermediate was published, and it was suggested that Lys 73, a strictly conserved residue among the class A β-lactamases, was acting as a general base, activating Ser 70. For this to be possible, the pK_a of Lys 73 would have to be depressed significantly. In an attempt to assay the pK_a of Lys 73, the mutation K73C was made. This mutant protein can be reacted with 2-bromoethylamine, and activity is restored to near wild-type levels. ^(15)N-2-bromoethylamine hydrobromide and ^(13)C-2-bromoethylamine hydrobromide were synthesized. Reacting these compounds with the K73C mutant gives stable isotopic enrichment at residue 73 in the form of aminoethylcysteine, a lysine homologue. The pK_a of an amine can be determined by NMR titration, following the change in chemical shift of either the ^(15)N-amine nuclei or adjacent ^(13)C nuclei as pH is changed. Unfortunately, low protein solubility, along with probable label scrambling in the ^(13)C experiment, did not permit direct observation of either the ^(15)N or ^(13)C signals. Indirect detection experiments were used to observe the protons bonded directly to the ^(13)C atoms. Two NMR signals were seen, and their chemical shift change with pH variation was noted. The peak which was determined to correspond to the aminoethylcysteine residue shifted from 3.2 ppm down to 2.8 ppm over a pH range of 6.6 to 12.5. The pK_a of the amine at position 73 was determined to be ~10. This indicates that residue 73 does not function as a general base in the acylation step of the reaction. However, the experimental measurement takes place in the absence of substrate.
Since the enzyme undergoes conformational changes upon substrate binding, the measured pK_a of the free enzyme may not correspond to the pK_a of the enzyme substrate complex.
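The pK_a read-out from such a titration can be sketched with the standard Henderson-Hasselbalch model: the observed chemical shift is a population-weighted average of the protonated and deprotonated limits, and the pK_a is the pH at the midpoint of the transition. A minimal sketch, assuming (for illustration only) that the reported 3.2 and 2.8 ppm values are the exact titration endpoints:

```python
# Henderson-Hasselbalch model of an NMR titration curve. Treating
# 3.2 ppm / 2.8 ppm as the exact protonated / deprotonated endpoints
# is an assumption made for this sketch.
def shift(pH, pKa, d_protonated=3.2, d_deprotonated=2.8):
    """Observed chemical shift (ppm) at a given pH."""
    frac = 1.0 / (1.0 + 10.0 ** (pH - pKa))  # fraction protonated
    return d_deprotonated + (d_protonated - d_deprotonated) * frac

def midpoint_pH(pKa, lo=6.0, hi=13.0):
    """pH where the curve crosses the midpoint shift (bisection)."""
    mid = (3.2 + 2.8) / 2
    for _ in range(60):
        m = (lo + hi) / 2
        if shift(m, pKa) > mid:  # still mostly protonated: go more basic
            lo = m
        else:
            hi = m
    return (lo + hi) / 2

# With pKa = 10, the model reproduces the observed 3.2 -> 2.8 ppm
# range over pH 6.6 to 12.5, and the midpoint recovers the pK_a.
assert abs(shift(6.6, 10.0) - 3.2) < 0.01
assert abs(shift(12.5, 10.0) - 2.8) < 0.01
assert abs(midpoint_pH(10.0) - 10.0) < 1e-6
```

The midpoint criterion is just the statement that at pH = pK_a the protonated fraction is one half, so the shift sits halfway between the two limits.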
Abstract:
Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, limited lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics, and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.
We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: What is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
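As a minimal illustration of the MDS property that underlies these RAID schemes (and not of the optimal-rebuilding array codes constructed in Part I), the following sketch builds a [4,2] code over GF(5) in which any two surviving disks recover the data; the generator matrix is invented for the example:

```python
# A [4,2] MDS code over GF(5): 2 data symbols stored on 4 "disks",
# surviving any 2 erasures. Illustrative only; the generator matrix
# below is an invented example, not a construction from the thesis.
P = 5  # field size GF(5)

# Generator matrix: systematic part plus two parity columns.
# Every pair of columns is linearly independent, hence MDS.
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]

def encode(u):
    """Encode 2 data symbols into 4 coded symbols (disks)."""
    return [(u[0] * G[0][j] + u[1] * G[1][j]) % P for j in range(4)]

def decode(symbols):
    """Recover the data from any 2 surviving (index, value) pairs."""
    (i, a), (j, b) = symbols
    # Solve the 2x2 system [G[:,i] G[:,j]]^T u = (a, b) over GF(5).
    m = [[G[0][i], G[1][i]], [G[0][j], G[1][j]]]
    det = (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % P
    inv = pow(det, P - 2, P)  # Fermat inverse; det != 0 for an MDS code
    u0 = (inv * (m[1][1] * a - m[0][1] * b)) % P
    u1 = (inv * (-m[1][0] * a + m[0][0] * b)) % P
    return [u0, u1]

data = [3, 4]
disks = encode(data)
# Erase disks 0 and 2; the two survivors suffice.
assert decode([(1, disks[1]), (3, disks[3])]) == data
```

The question studied in Part I is how much of the surviving data such a code must *read* to repair a single erasure; this toy decoder always reads k = 2 full symbols, whereas the thesis shows a fraction 1/2 of the remaining information suffices with suitable array codes.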
We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows will increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem on universal cycles, where a universal cycle is a sequence of integers generating all possible partial permutations.
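The basic read-out of rank modulation can be sketched as follows; the charge values are invented, and this shows only the plain scheme of Jiang et al., not the bounded or partial variants proposed in Part II:

```python
# Rank modulation sketch: information is carried by the permutation
# induced by the relative charge levels of n cells, so no discrete
# absolute levels ever need to be programmed. Charge values invented.
def read_permutation(charges):
    """Rank the cells from highest to lowest charge."""
    return sorted(range(len(charges)), key=lambda i: -charges[i])

# Two physically different charge vectors with the same ranking decode
# to the same symbol: uniform charge drift does not corrupt the data.
assert read_permutation([0.9, 0.1, 0.5]) == [0, 2, 1]
assert read_permutation([0.7, 0.2, 0.3]) == [0, 2, 1]
```

With n cells there are n! such permutations, i.e. log2(n!) bits per group of cells.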
Abstract:
This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be the incidence matrix of edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
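For small matrices, a diagonal form of this kind can be computed by generic integer row and column reduction. The sketch below does exactly that (it does not reproduce the closed-form results of the thesis) and checks it on the vertex-edge incidence matrix of K4, using the invariant that the product of the diagonal entries equals the gcd of the maximal minors:

```python
# Diagonal form of an integer matrix via elementary row/column
# operations; a sketch suitable only for small matrices.
from itertools import combinations

def diagonal_form(mat):
    """Reduce an integer matrix to diagonal form; return the nonzero
    diagonal entries (absolute values, sorted)."""
    A = [row[:] for row in mat]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # Pick a nonzero entry of minimal absolute value as the pivot.
        cand = [(abs(A[i][j]), i, j) for i in range(t, m)
                for j in range(t, n) if A[i][j] != 0]
        if not cand:
            break
        _, i0, j0 = min(cand)
        A[t], A[i0] = A[i0], A[t]
        for r in range(m):
            A[r][t], A[r][j0] = A[r][j0], A[r][t]
        cleared = True
        for i in range(t + 1, m):          # clear column t
            q = A[i][t] // A[t][t]
            for j in range(t, n):
                A[i][j] -= q * A[t][j]
            cleared = cleared and A[i][t] == 0
        for j in range(t + 1, n):          # clear row t
            q = A[t][j] // A[t][t]
            for i in range(t, m):
                A[i][j] -= q * A[i][t]
            cleared = cleared and A[t][j] == 0
        if cleared:
            t += 1  # nonzero remainders left? repeat with a smaller pivot
    return sorted(abs(A[i][i]) for i in range(t))

# Vertex-edge incidence matrix of the complete graph K4.
edges = list(combinations(range(4), 2))
N = [[1 if v in e else 0 for e in edges] for v in range(4)]
d = diagonal_form(N)
# Rank 4; the product of the entries (= product of the elementary
# divisors) is 2 for this connected non-bipartite graph.
assert len(d) == 4 and d[0] * d[1] * d[2] * d[3] == 2
```

Note that a diagonal form need not satisfy the divisibility chain of the Smith normal form; the product and rank invariants checked above hold for either.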
As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.
One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results on zero-sum Ramsey numbers for graphs and Caro and Yuster's results on zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.
Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
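The flavor of the allocation problem can be illustrated by brute force on a toy instance, with all parameters invented: a budget is spread evenly over m of n nodes, a collector accesses r nodes uniformly at random, and recovery succeeds when the accessed storage reaches the normalized data size of 1:

```python
# Brute-force toy version of the storage allocation problem. All
# parameters (n, r, budgets) are invented for illustration; the
# thesis treats the general problem, not this enumeration.
from itertools import combinations
from fractions import Fraction

def success_prob(allocation, r):
    """P(a uniformly random r-subset of nodes stores >= 1 in total)."""
    n = len(allocation)
    subsets = list(combinations(range(n), r))
    good = sum(1 for s in subsets
               if sum(allocation[i] for i in s) >= 1)
    return Fraction(good, len(subsets))

def symmetric(budget, m, n):
    """Spread `budget` evenly over m of the n nodes."""
    return [Fraction(budget) / m] * m + [Fraction(0)] * (n - m)

n, r = 6, 3
# Large budget: maximal spreading guarantees recovery (3 * 5/12 >= 1),
# while concentrating on one node succeeds only half the time.
assert success_prob(symmetric(Fraction(5, 2), 6, n), r) == 1
assert success_prob(symmetric(Fraction(5, 2), 1, n), r) == Fraction(1, 2)
# Small budget: spreading is fatal (3 * 1/6 < 1); concentrate instead.
assert success_prob(symmetric(1, 6, n), r) == 0
assert success_prob(symmetric(1, 1, n), r) == Fraction(1, 2)
```

This reproduces, on one tiny instance, the heuristic stated above: spread maximally when the budget is large, minimally when it is small.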
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
Abstract:
This thesis presents the development of chip-based technology for informative in vitro cancer diagnostics. In the first part of this thesis, I will present my contributions to the development of a technology called "Nucleic Acid Cell Sorting (NACS)", based on microarrays composed of nucleic acid encoded peptide major histocompatibility complexes (p/MHC), and the experimental and theoretical methods to detect and analyze secreted proteins from single or few cells.
Secondly, a novel portable platform for imaging cellular metabolism with radio probes is presented. A microfluidic chip, the so-called "Radiopharmaceutical Imaging Chip" (RIMChip), combined with a beta-particle imaging camera, is developed to visualize the uptake of radio probes in a small number of cells. Due to its sophisticated design, RIMChip allows robust and user-friendly execution of sensitive and quantitative radio assays. The performance of this platform is validated with adherent and suspension cancer cell lines. This platform is then applied to study the metabolic response of cancer cells under drug treatment. In both mouse lymphoma and human glioblastoma cell lines, metabolic responses to drug exposure are observed within a short time (~1 hour), and are correlated with cell-cycle arrest or with changes in receptor tyrosine kinase signaling.
The last parts of this thesis present summaries of ongoing projects: the development of a new agent as an in vivo imaging probe for c-MET, and quantitative monitoring of the glycolytic metabolism of primary glioblastoma cells. To develop a new agent for c-MET imaging, the one-bead-one-compound combinatorial library method is used, coupled with iterative screening. The performance of the agent is quantitatively validated with cell-based fluorescent assays. To monitor the metabolism of primary glioblastoma cells by RIMChip, cells were sorted according to their expression levels of oncoproteins, or were treated with different kinds of drugs, to study the metabolic heterogeneity of cancer cells and the metabolic response of glioblastoma cells to drug treatments, respectively.
Abstract:
Nucleic acids are a useful substrate for engineering at the molecular level. Designing the detailed energetics and kinetics of interactions between nucleic acid strands remains a challenge. Building on previous algorithms to characterize the ensemble of dilute solutions of nucleic acids, we present a design algorithm that allows optimization of structural features and binding energetics of a test tube of interacting nucleic acid strands. We extend this formulation to handle multiple thermodynamic states and combinatorial constraints to allow optimization of pathways of interacting nucleic acids. In both design strategies, low-cost estimates to thermodynamic properties are calculated using hierarchical ensemble decomposition and test tube ensemble focusing. These algorithms are tested on randomized test sets and on example pathways drawn from the molecular programming literature. To analyze the kinetic properties of designed sequences, we describe algorithms to identify dominant species and kinetic rates using coarse-graining at the scale of a small box containing several strands or a large box containing a dilute solution of strands.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
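The contrast between least squares and the lasso in point i) can be sketched on a toy problem; the data, penalty, and coordinate-descent solver below are invented for illustration and are not the estimators analyzed in the thesis:

```python
# Toy contrast between ordinary least squares and the lasso. The data
# are generated (with a little noise) by a sparse signal whose second
# coefficient is zero; all numbers are invented for illustration.
def lasso_cd(X, y, lam, sweeps=50):
    """Minimize (1/2)||y - Xb||^2 + lam * sum(|b_j|) by coordinate descent."""
    m, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k]
                                            for k in range(p) if k != j))
                      for i in range(m))
            zj = sum(X[i][j] ** 2 for i in range(m))
            # soft-thresholding: the l1 penalty zeroes small coordinates
            b[j] = (rho - lam) / zj if rho > lam else \
                   (rho + lam) / zj if rho < -lam else 0.0
    return b

X = [[1.0, 0.3], [0.0, 1.0], [1.0, 0.2], [0.0, -1.0]]
y = [2.1, 0.2, 1.9, 0.1]   # roughly 2 * first column; second is noise

# Least squares via the 2x2 normal equations keeps a spurious coefficient.
g = [[sum(X[i][a] * X[i][c] for i in range(4)) for c in range(2)] for a in range(2)]
r = [sum(X[i][a] * y[i] for i in range(4)) for a in range(2)]
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
ls = [(r[0] * g[1][1] - g[0][1] * r[1]) / det,
      (g[0][0] * r[1] - g[1][0] * r[0]) / det]
assert abs(ls[1]) > 0.01

# The lasso sets it exactly to zero and mildly shrinks the active one.
b = lasso_cd(X, y, lam=0.5)
assert b[1] == 0.0 and abs(b[0] - 1.75) < 1e-9
```

The exact zero (rather than a merely small value) is the signature of the l1 relaxation of the combinatorial sparsity objective.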
Abstract:
The structure of the set ϐ(A) of all eigenvalues of all complex matrices (elementwise) equimodular with a given n × n non-negative matrix A is studied. The problem was suggested by O. Taussky and some aspects have been studied by R. S. Varga and B.W. Levinger.
If every matrix equimodular with A is non-singular, then A is called regular. A new proof of the P. Camion-A.J. Hoffman characterization of regular matrices is given.
The set ϐ(A) consists of m ≤ n closed annuli centered at the origin. Each gap, ɤ, in this set can be associated with a class of regular matrices with a (unique) permutation, π(ɤ). The association depends on both the combinatorial structure of A and the size of the a_ii. Let A be associated with the set of r permutations, π1, π2,…, πr, where each gap in ϐ(A) is associated with one of the πk. Then r ≤ n, even when the complement of ϐ(A) has n+1 components. Further, if π(ɤ) is the identity, the real boundary points of ɤ are eigenvalues of real matrices equimodular with A. In particular, if A is essentially diagonally dominant, every real boundary point of ϐ(A) is an eigenvalue of a real matrix equimodular with A.
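A quick numerical sanity check (not an argument from the thesis) is that the outer boundary of ϐ(A) cannot exceed the Perron root of A, since ρ(B) ≤ ρ(|B|) for any B equimodular with A; sampling random phases for an invented 2 × 2 example:

```python
# Sampled check that eigenvalues of matrices equimodular with a
# nonnegative A stay inside |z| <= rho(A), the Perron root of A.
# The 2x2 matrix A is an invented example.
import cmath
import math
import random

A = [[2.0, 1.0], [1.0, 3.0]]

def eig2(M):
    """Eigenvalues of a 2x2 complex matrix via the quadratic formula."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    s = cmath.sqrt(tr * tr - 4 * det)
    return [(tr + s) / 2, (tr - s) / 2]

perron = max(abs(ev) for ev in eig2(A))  # Perron root of nonnegative A

random.seed(0)
for _ in range(1000):
    # A random matrix equimodular with A: same moduli, arbitrary phases.
    B = [[A[i][j] * cmath.exp(1j * random.uniform(0, 2 * math.pi))
          for j in range(2)] for i in range(2)]
    assert all(abs(ev) <= perron + 1e-9 for ev in eig2(B))
```

The inner boundaries of the annuli, and the association of gaps with permutations, are the substantive part of the thesis and are not reproduced by this sampling.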
Several conjectures based on these results are made which if verified would constitute an extension of the Perron-Frobenius Theorem, and an algebraic method is introduced which unites the study of regular matrices with that of ϐ(A).
Abstract:
Combinatorial configurations known as t-designs are studied. These are pairs ⟨B, ∏⟩, where each element of B is a k-subset of ∏, and each t-subset of ∏ occurs in exactly λ elements of B, for some fixed integers k and λ. A theory of internal structure of t-designs is developed, and it is shown that any t-design can be decomposed in a natural fashion into a sequence of "simple" subdesigns. The theory is quite similar to the analysis of a group with respect to its normal subgroups, quotient groups, and homomorphisms. The analogous concepts of normal subdesigns, quotient designs, and design homomorphisms are all defined and used.
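A concrete instance of the definition, for orientation: the Fano plane is a 2-design with k = 3 and λ = 1 on seven points, and doubling its blocks gives a λ = 2 design of the kind examined in the final chapter; the check below is a straightforward sketch:

```python
# Verifying the t-design property on the Fano plane (a standard
# example, not a construction from the thesis).
from itertools import combinations

fano = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6},
        {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def is_t_design(blocks, points, t, lam):
    """Check that every t-subset of points lies in exactly lam blocks."""
    return all(sum(1 for b in blocks if set(s) <= b) == lam
               for s in combinations(points, t))

# Every pair of points lies in exactly one block: a 2-(7, 3, 1) design.
assert is_t_design(fano, range(7), 2, 1)
# Repeating every block yields a 2-design with lambda = 2.
assert is_t_design(fano * 2, range(7), 2, 2)
```

Note that blocks may repeat, which is exactly why the λ = 2, k = 3 designs studied later are not all mere doublings of Steiner triple systems.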
This structure theory is then applied to the class of t-designs whose automorphism groups are transitive on sets of t points. It is shown that if G is a permutation group transitive on sets of t letters and ф is any set of letters, then images of ф under G form a t-design whose parameters may be calculated from the group G. Such groups are discussed, especially for the case t = 2, and the normal structure of such designs is considered. Theorem 2.2.12 gives necessary and sufficient conditions for a t-design to be simple, purely in terms of the automorphism group of the design. Some constructions are given.
Finally, 2-designs with k = 3 and λ = 2 are considered in detail. These designs are first considered in general, with examples illustrating some of the configurations which can arise. Then an attempt is made to classify all such designs with an automorphism group transitive on pairs of points. Many cases are eliminated or reduced to combinations of Steiner triple systems. In the remaining cases, the simple designs are determined to consist of one infinite class and one exceptional case.
Abstract:
Systems-level studies of biological systems rely on observations taken at a resolution lower than the essential unit of biology, the cell. Recent technical advances in DNA sequencing have enabled measurements of the transcriptomes in single cells excised from their environment, but it remains a daunting technical problem to reconstruct in situ gene expression patterns from sequencing data. In this thesis I develop methods for the routine, quantitative in situ measurement of gene expression using fluorescence microscopy.
The number of molecular species that can be measured simultaneously by fluorescence microscopy is limited by the palette of spectrally distinct fluorophores. Thus, fluorescence microscopy is traditionally limited to the simultaneous measurement of only five labeled biomolecules at a time. The two methods described in this thesis, super-resolution barcoding and sequential barcoding, represent strategies for overcoming this limitation to monitor expression of many genes in a single cell. Super-resolution barcoding employs optical super-resolution microscopy (SRM) and combinatorial labeling via smFISH (single molecule fluorescence in situ hybridization) to uniquely label individual mRNA species with distinct barcodes resolvable at nanometer resolution. This method dramatically increases the optical space in a cell, allowing a large number of barcodes to be visualized simultaneously. As a proof of principle this technology was used to study the S. cerevisiae calcium stress response. The second method, sequential barcoding, reads out a temporal barcode through multiple rounds of oligonucleotide hybridization to the same mRNA. The multiplexing capacity of sequential barcoding increases exponentially with the number of rounds of hybridization, allowing over a hundred genes to be profiled in only a few rounds of hybridization.
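The exponential multiplexing capacity can be sketched directly: with F colors and N hybridization rounds, F^N color sequences are available as barcodes. The gene names and color assignments below are invented for illustration:

```python
# Sequential-barcoding read-out sketch: each gene is assigned a
# sequence of colors over several hybridization rounds. Gene names
# and color assignments are invented for illustration.
from itertools import product

colors = ["red", "green", "blue", "yellow"]
rounds = 3
barcodes = list(product(colors, repeat=rounds))
assert len(barcodes) == len(colors) ** rounds  # 4**3 = 64 barcodes

# Assign barcodes to hypothetical genes and decode one observation:
# the colors seen at a single mRNA spot, one per round.
genes = {barcodes[i]: f"gene_{i}" for i in range(8)}
observed = ("red", "green", "red")
decoded = genes.get(observed)
assert decoded == "gene_4"
```

With 4 colors, 4 rounds already distinguish 256 genes, which is the sense in which a few rounds suffice for over a hundred genes.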
The utility of sequential barcoding was further demonstrated by adapting this method to study gene expression in mammalian tissues. Mammalian tissues suffer from both a large amount of autofluorescence and light scattering, making detection of smFISH probes on mRNA difficult. An amplified single molecule detection technology, smHCR (single molecule hybridization chain reaction), was developed to allow for the quantification of mRNA in tissue. This technology is demonstrated in combination with light sheet microscopy and background-reducing tissue clearing technology, enabling whole-organ sequential barcoding to monitor in situ gene expression directly in intact mammalian tissue.
The methods presented in this thesis, specifically sequential barcoding and smHCR, enable multiplexed transcriptional observations in any tissue of interest. These technologies will serve as a general platform for future transcriptomic studies of complex tissues.
Abstract:
In recent decades, the job shop scheduling problem, referred to in the literature as the JSSP, has received great attention from researchers around the world. One of the reasons for such interest is its high complexity. The JSSP is a combinatorial problem classified as NP-hard and, although a wide variety of methods and heuristics exist that are capable of solving it, there is still no method or heuristic capable of finding optimal solutions for all the benchmark problems presented in the literature. The other reason rests on the fact that this problem is present in the day-to-day operations of manufacturing industries across several segments; since optimizing the schedule can yield a significant reduction in production time and, consequently, better use of production resources, it can have a strong impact on the profits of these industries, especially in cases where the production sector accounts for a large share of their total costs. Among the heuristics that can be applied to this problem, Tabu Search and Particle Swarm Optimization perform well on most of the benchmark problems found in the literature. Tabu Search generally converges quickly to optimal or suboptimal points, but this convergence is frequently interrupted by cyclic processes, and the performance of the method depends strongly on the initial solution and on the tuning of its parameters. Particle Swarm Optimization tends to converge to optimal points at the cost of a large computational effort, and its performance is also highly sensitive to parameter tuning.
Since the different heuristics applied to the problem have strengths and weaknesses, some researchers have begun to concentrate their efforts on hybridizing existing heuristics, with the aim of creating new hybrid heuristics that combine the qualities of their base heuristics while reducing or even eliminating their negative aspects. In this work, three hybridization models based on the general scheme of local search heuristics are first presented and tested with the Tabu Search and Particle Swarm Optimization heuristics. An adaptation of the Particle Collision method, originally developed for continuous problems, is then presented, in which Tabu Search is used as the local exploration operator and mutation operators are used to perturb the solution. As a result, this work shows that, in the case of the hybrid models, the complementary and distinct natures of the Tabu Search and Particle Swarm Optimization methods, in the form presented here, give rise to robust algorithms capable of generating optimal or very good solutions that are much less sensitive to the parameter tuning of each of the original methods. In the case of the Particle Collision method, the new algorithm is able to attenuate the sensitivity to parameter tuning and to avoid the cyclic processes of Tabu Search, thus producing better results.
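The objective that both Tabu Search and Particle Swarm Optimization minimize in the JSSP is the makespan of a schedule. A minimal evaluator, on an invented 2-job, 2-machine instance, scheduling each operation at the earliest time its job and its machine are both free:

```python
# Makespan evaluation for a toy JSSP instance. The routings and
# durations are invented; heuristics like Tabu Search search over
# the dispatching orders evaluated here.
def makespan(jobs, order):
    """jobs[j] = list of (machine, duration) in routing order;
    order = a dispatching sequence of job indices, one entry per
    operation of that job."""
    next_op = [0] * len(jobs)
    job_free = [0] * len(jobs)   # time each job finishes its last op
    mach_free = {}               # time each machine becomes idle
    for j in order:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_free[j], mach_free.get(machine, 0))
        job_free[j] = mach_free[machine] = start + dur
        next_op[j] += 1
    return max(job_free)

jobs = [[(0, 3), (1, 3)],   # job 0: machine 0 then machine 1
        [(0, 2), (1, 1)]]   # job 1: machine 0 then machine 1
# Different dispatching orders yield different makespans; the
# heuristics explore this combinatorial space of orders.
assert makespan(jobs, [0, 1, 0, 1]) == 7
assert makespan(jobs, [1, 0, 1, 0]) == 8
```

The cyclic behavior mentioned above corresponds to a search revisiting the same orders; the tabu list exists precisely to forbid such recently visited moves.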
Abstract:
In this paper, the architectures of three-degrees-of-freedom (3-DoF) spatial, fully parallel manipulators (PMs), whose limbs are structurally identical, are obtained systematically. To do this, the methodology followed makes use of the concepts of the displacement group theory of rigid body motion. This theory works with so-called 'motion generators': every limb is a kinematic chain that produces a certain type of displacement in the mobile platform or end-effector. The laws of group algebra determine the actual motion pattern of the end-effector. The structural synthesis is a combinatorial process over different kinematic chain topologies, employed in order to obtain all of the possible 3-DoF motion patterns in the end-effector of the fully parallel manipulator.