956 results for "Cluster Counting Algorithm"


Relevance: 20.00%

Abstract:

Competitiveness and shared value

Relevance: 20.00%

Abstract:

The Lennard-Jones and Devonshire (LJD) single-particle theory for liquids is extended and applied to the anharmonic solid in the high-temperature limit. The exact free energy of the crystal is expressed as a convergent series of terms involving larger and larger sets of contiguous particles called cell clusters. The motions of all particles within a cell cluster are correlated with each other and lead to non-trivial integrals of orders 3, 6, 9, ..., 3N. For the first time, the six-dimensional integral has been calculated to high accuracy using a Lennard-Jones (6-12) pair interaction between nearest neighbours only, for the f.c.c. lattice. The thermodynamic properties predicted by this model agree well with experimental results for solid xenon.
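As a small illustration of the pair interaction named above, the Lennard-Jones (6-12) potential can be evaluated directly. This is a minimal sketch in reduced units (epsilon = sigma = 1 by default), not code from the thesis:

```python
def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones (6-12) pair potential: 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    s = (sigma / r) ** 6
    return 4.0 * epsilon * (s * s - s)

# The potential minimum sits at r = 2**(1/6) * sigma with depth -epsilon.
print(round(lj(2 ** (1 / 6)), 6))  # → -1.0
```

The (6-12) form is repulsive at short range and weakly attractive at long range, which is why nearest-neighbour-only sums are a reasonable first approximation for the f.c.c. lattice.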

Relevance: 20.00%

Abstract:

A method using L-cysteine for the determination of arsenous acid (As(III)), arsenic acid (As(V)), monomethylarsonic acid (MMAA), and dimethylarsinic acid (DMAA) by hydride generation was demonstrated. The instrument used was a d.c. plasma atomic emission spectrometer (DCP-AES). Complete recovery was reported for As(III), As(V), and DMAA, while 86% recovery was reported for MMAA. Detection limits, as arsenic for the species listed above, were 1.2, 0.8, 1.1, and 1.0 ng/mL, respectively. Precision values at 50 ng/mL arsenic concentration were 1.8%, 2.5%, 2.6%, and 2.6% relative standard deviation, respectively. The L-cysteine reagent was compared directly with the conventional hydride generation technique, which uses a potassium iodide-hydrochloric acid medium. Compared with the conventional method, L-cysteine gave similar recoveries for As(III), slightly better recoveries for As(V) and MMAA, and significantly better recoveries for DMAA. In addition, tall, sharp peak shapes were observed for all four species when using L-cysteine. The arsenic speciation method involved separation by ion-exchange high-performance liquid chromatography (HPLC) with on-line hydride generation using the L-cysteine reagent and measurement by DCP-AES. Total analysis time per sample was 12 min, while the time between the start of subsequent runs was approximately 20 min. A binary gradient elution program with two eluents, 0.01 and 0.5 mM trisodium citrate, both containing 5% methanol (v/v) and both at a pH of approximately 9, was used during the separation by HPLC. Recoveries of the four species, measured as peak area and normalized against As(III), were 88%, 29%, and 40% for DMAA, MMAA, and As(V), respectively.
The resolution factor between the adjacent analyte peaks of As(III) and DMAA was 1.1; between DMAA and MMAA, 1.3; and between MMAA and As(V), 8.6. During the arsenic speciation study, signals from the d.c. plasma optical system were measured using a new photon-signal integrating device. The new photon integrator, developed and built in this laboratory, was based on a previously published design, further modified to reflect currently available hardware. This photon integrator was interfaced to a personal computer through an A/D converter. The photon integrator has adjustable threshold settings and an adjustable post-gain device.
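The resolution factors quoted above follow the standard chromatographic definition Rs = 2(t2 - t1)/(w1 + w2) for adjacent peaks with retention times t and baseline widths w. A minimal sketch; the retention times and widths below are hypothetical, chosen only to give an Rs near the reported As(III)/DMAA value:

```python
def resolution(t1, w1, t2, w2):
    """Chromatographic resolution: Rs = 2*(t2 - t1) / (w1 + w2),
    with retention times t and baseline peak widths w (same time units)."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Hypothetical retention times/widths in minutes, not values from the thesis.
print(round(resolution(2.0, 0.9, 3.0, 0.92), 2))  # → 1.1
```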

Relevance: 20.00%

Abstract:

In this thesis, I use "Fabricating Authenticity," a model developed in the Production of Culture Perspective, to explore the evolving criteria for judging what constitutes "real" and authentic Niagara wine, along with the naturalization of these criteria, as the Canadian Niagara wine cluster has come under increasing stress from globalization. Authenticity has been identified as a hallmark of contemporary marketing and as important to cultural industries, which can use it to create meaningful differentiation, making it a renewable resource for securing consumers, increasing market value, and building relationships with key brokers. This matters because free trade and international treaties are rendering traditional protective barriers, such as trade tariffs and markups, obsolete, and because governments increasingly allocate industry support via promotion and marketing policies directly linked to objectives of city and regional development, which in turn carry real implications for what gets judged authentic and inauthentic local culture. This research uses a mixed-methods strategy, drawing upon ethnographic observation, marketing materials, newspaper reports, and secondary data to provide insight into the processes of, and conflicts over, efforts to fabricate authenticity, comparing the periods before and after the passage of NAFTA with the present period. The Niagara wine cluster is a good case in point because it had little natural advantage and no tradition of quality table-wine making to facilitate the naturalization of authenticity.
Geographic industrial clusters have been found to be particularly competitive in the global economy, and this exploratory case study contributes to our understanding of the dynamics of "fabricating authenticity," building on various theoretical propositions to derive explanations of how global processes affect strategies to create "authenticity," how these strategies affect cultural homogeneity and heterogeneity at the local level, and how the concept of "cluster" contributes to the process of managing authenticity.

Relevance: 20.00%

Abstract:

This thesis introduces the Salmon Algorithm, a search metaheuristic applicable to a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimal parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit-code test cases. It matched the best known results on four of the seven Hamming codes and on all three covering codes. The results suggest that the Salmon Algorithm is competitive with established guided random search techniques and may be superior in some search spaces.
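For the Hamming-code problems, a natural objective is the minimum pairwise Hamming distance of a candidate code. The abstract does not describe the Salmon Algorithm's internals, so the sketch below shows only this objective function, as one might plug into any of the metaheuristics mentioned:

```python
from itertools import combinations

def min_hamming_distance(codewords):
    """Smallest pairwise Hamming distance over a code (equal-length words)."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(codewords, 2))

# The binary repetition code {000, 111} has minimum distance 3.
print(min_hamming_distance(["000", "111"]))  # → 3
```

A guided random search would try to grow the code (more words) while keeping this minimum distance at or above a target d.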

Relevance: 20.00%

Abstract:

Understanding the machinery of gene regulation to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare some suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequence models and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
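A multi-objective genetic algorithm of the kind described typically ranks candidate motifs by Pareto dominance across the scoring objectives. The thesis's exact scoring schemes are not given here, so the sketch below shows only the generic dominance test (maximization assumed):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization):
    a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

print(dominates((3, 5), (2, 5)))  # → True
print(dominates((3, 5), (4, 1)))  # → False (the two vectors are incomparable)
```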

Relevance: 20.00%

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and other small genomes; however, assembling large and complex genomes, especially the human genome, from Next-Generation Sequencing (NGS) data has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms do not even scale to human-genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including error detection and correction in the read data, contig creation, scaffolding, and contig orientation, each of which can be seen as a distinct research area. This thesis focuses specifically on creating contigs from the short reads and combining them with the outputs of other tools to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08], and Meraculous [CHS+11], were selected for comparison. The results show that this work produces contigs comparable to those of the other assemblers, and that combining our contigs with the outputs of the other tools produces the best results, outperforming all other investigated assemblers.
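Contig creation from short reads can be illustrated with a toy greedy overlap merger. This is a simplified stand-in for intuition only; it is not the thesis's algorithm, and the assemblers named above use de Bruijn graph methods instead:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that equals a prefix of b (>= min_len)."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads, min_len=3):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, 0, 1)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b, min_len)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:
            break  # no sufficient overlap left; stop with multiple contigs
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)] + [merged]
    return reads

print(greedy_assemble(["ATTAGACC", "GACCTGCC", "TGCCAAGT"]))
# → ['ATTAGACCTGCCAAGT']
```

Repeats are exactly what defeats this greedy strategy on real genomes: two distinct loci with identical overlaps get merged incorrectly, which is why human-scale assembly is so much harder.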

Relevance: 20.00%

Abstract:

Ordered gene problems are a very common class of optimization problems, and countless algorithms have been developed in attempts to find high-quality solutions to them. It is also common to see many other types of problems reduced to ordered gene style problems, since many popular heuristics and metaheuristics exist for them. Multiple ordered gene problems are studied here: the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics, using two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 of 16 benchmark problem instances.
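Ordered gene problems are usually encoded as permutations, which need specialized crossover operators that preserve permutation validity. The sketch below is a simplified variant of order crossover, shown only for illustration; it is not necessarily the operator used in the Recentering-Restarting GA:

```python
def order_crossover(p1, p2, a, b):
    """Simplified order crossover (OX): copy p1[a:b] into the child, then
    fill the remaining slots, left to right, with p2's genes in p2's order."""
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = iter(g for g in p2 if g not in child)
    for i in range(len(child)):
        if child[i] is None:
            child[i] = next(fill)
    return child

p1 = [1, 2, 3, 4, 5, 6, 7]
p2 = [3, 7, 5, 1, 6, 2, 4]
print(order_crossover(p1, p2, 2, 5))  # → [7, 1, 3, 4, 5, 6, 2]
```

The child inherits a contiguous ordering from one parent and the relative order of the remaining genes from the other, and is always a valid permutation.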

Relevance: 20.00%

Abstract:

Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created by the large number of high-throughput experiments performed in recent years has resulted in unprecedented growth in computational methods to tackle the disease-gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in one or several biological modules. Using a genetic algorithm, our method evolves communities containing the set of potential disease genes likely to be involved in a given genetic disease. Starting from a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all of them. All other genes inside the resulting PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes that interact strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR, and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
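The candidate-generation step described above (collecting genes in the network vicinity of the known disease genes) can be sketched as follows; the PPI adjacency below is a hypothetical toy, not real interaction data:

```python
def candidate_genes(ppi, seeds):
    """Candidate disease genes: direct interaction partners of the known
    (seed) disease genes in the PPI network, excluding the seeds themselves."""
    cands = set()
    for g in seeds:
        cands.update(ppi.get(g, ()))
    return cands - set(seeds)

# Toy PPI network as an adjacency dict of interaction-partner sets.
ppi = {"BRCA1": {"BARD1", "TP53"}, "TP53": {"MDM2", "BRCA1"}}
print(sorted(candidate_genes(ppi, {"BRCA1", "TP53"})))  # → ['BARD1', 'MDM2']
```

The genetic algorithm then searches over subsets of these candidates for communities that are densely connected to the seeds.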

Relevance: 20.00%

Abstract:

The City of St. Catharines, located on the southern shore of Lake Ontario, is Niagara Region's only major urban node. Like many small and medium-sized cities in Canada and abroad, the city experienced a rapid decline of large-scale manufacturing in the 1990s. In a renewed attempt to recover from this economic depression, and spurred by provincial policy, the City implemented the Downtown Creative Cluster Master Plan (DCCMP) in 2008. In this thesis I conduct a discourse analysis of the DCCMP. My analysis indicates that the DCCMP is shaped by neoliberal economic development paradigms. As such, it is designed to restructure the downtown into a creative cluster by attracting developers and investors and by appealing to the interests, tastes, and desires of middle-class consumers and creatives. I illustrate that this competitive-city approach to urban planning has a questionable track record and has been shown to result in retail and residential gentrification and displacement.

Relevance: 20.00%

Abstract:

In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of every node for graphs obtained from the Merriam-Webster dictionary, a French dictionary, and the WordNet hypernym and synonym dictionaries. We conclude that PageRank can be a good tool for comparing the quality of dictionaries. We also studied some artificial social graphs and random graphs. We found that omitting some random nodes from each of the graphs did not significantly change the ranking of the remaining nodes by PageRank. We also discovered that some of the social graphs selected for our study were less resistant to such changes in PageRank.
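The degree-PageRank correlation experiment can be sketched in a few lines of plain power iteration; the toy graph below is illustrative only, not one of the dictionary graphs actually studied:

```python
import math

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on an adjacency dict {node: list of out-links}."""
    n = len(adj)
    pr = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in adj}
        for v, outs in adj.items():
            if outs:
                share = d * pr[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node: distribute its mass uniformly
                for u in adj:
                    new[u] += d * pr[v] / n
        pr = new
    return pr

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy directed graph standing in for a dictionary graph (word -> words used
# in its definition); the graphs studied in the thesis are far larger.
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a", "c"]}
pr = pagerank(g)
deg = {v: len(g[v]) + sum(v in outs for outs in g.values()) for v in g}
order = sorted(g)
r = pearson([deg[v] for v in order], [pr[v] for v in order])
print(round(sum(pr.values()), 6))  # → 1.0 (PageRank is a distribution)
```

The robustness experiment then amounts to deleting random nodes, recomputing `pr`, and comparing the two rankings.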

Relevance: 20.00%

Abstract:

The goal of most clustering algorithms is to find the optimal (i.e., fewest) number of clusters. However, analysis of molecular conformations of biological macromolecules obtained from computer simulations may benefit from a larger array of clusters. The Self-Organizing Map (SOM) clustering method has the advantage of generating large numbers of clusters, but often gives ambiguous results. In this work, SOMs have been shown to be reproducible when the same conformational dataset is independently clustered multiple times (~100), with the help of Cramér's V index (C_v). The ability of C_v to determine which SOMs are reproduced generalizes across different SOM source codes. The conformational ensembles produced from MD (molecular dynamics) and REMD (replica exchange molecular dynamics) simulations of the pentapeptide Met-enkephalin (MET) and the 34-amino-acid protein human parathyroid hormone (hPTH) were used to evaluate SOM reproducibility. The training length of the SOM has a large impact on reproducibility. Analysis of the MET conformational data determined definitively that toroidal SOMs cluster the data better than bordered maps, because toroidal maps have no edge effect. For the MATLAB source code, it was determined that the learning-rate function should be LINEAR with an initial learning-rate factor of 0.05, and that the SOM should be trained with a sequential algorithm. A trained SOM can then be used as a supervised classifier for another dataset. The toroidal 10×10 hexagonal SOMs produced by the MATLAB program for the hPTH conformational data yielded three sets of reproducible clusters (27%, 15%, and 13% of 100 independent runs), which find partitionings similar to those of smaller 6×6 SOMs. The χ² values produced as part of the C_v calculation were used to locate clusters with identical conformational memberships on independently trained SOMs, even those with different dimensions, and could relate the different SOM partitionings to each other.
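Cramér's V, used above to compare independent SOM runs, is computed from the χ² statistic of the contingency table of cluster memberships: V = sqrt(χ² / (n · (min(r, c) − 1))). A minimal sketch (not the MATLAB code used in the thesis):

```python
import math

def cramers_v(table):
    """Cramér's V for an r x c contingency table given as a list of rows:
    V = sqrt(chi2 / (n * (min(r, c) - 1))), where n is the grand total."""
    r, c = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(r)) for j in range(c)]
    chi2 = 0.0
    for i in range(r):
        for j in range(c):
            expected = row_tot[i] * col_tot[j] / n
            if expected:
                chi2 += (table[i][j] - expected) ** 2 / expected
    return math.sqrt(chi2 / (n * (min(r, c) - 1)))

# Two SOM runs whose cluster memberships agree perfectly give V = 1;
# statistically independent memberships give V = 0.
print(cramers_v([[10, 0], [0, 10]]))  # → 1.0
```

V near 1 across many independent runs is what identifies a reproducible SOM partitioning.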

Relevance: 20.00%

Abstract:

Thesis (Doctor of Philosophy with a Specialization in Administration), UANL, 2012.

Relevance: 20.00%

Abstract:

A significant fraction of eukaryotic genomes consists of Tandemly Repeated Genes (TRGs). A fundamental mechanism in the evolution of TRGs is unequal recombination during meiosis, leading to the local (tandem) duplication of chromosomal segments containing one or more adjacent genes. Various algorithms have been proposed to infer a tandem duplication history for a TRG cluster. However, their practical use is limited because they do not account for other frequent evolutionary events, such as inversions, inverted duplications, and deletions. This thesis proposes several algorithmic approaches for integrating these events into the classical tandem duplication model. Our contributions are the following: • Integrating inversions into a simple tandem duplication model (duplicating one gene at a time) and proposing an exact algorithm for computing the minimum number of inversions that occurred in the evolution of a TRG cluster. • Generalizing this model to the study of a set of orthologous clusters across several species. • Proposing an algorithm that infers the evolutionary history of a TRG cluster while accounting for tandem duplications, inverted duplications, inversions, and deletions of chromosomal segments containing one or more adjacent genes.