5 results for Complex combinatorial problem
at Brock University, Canada
Abstract:
Chronic low back pain (CLBP) is a complex health problem whose psychological manifestations are not fully understood. Using interpretive phenomenological analysis, 11 semi-structured interviews were conducted to help understand the meaning of the lived experience of CLBP, focusing on the psychological response to pain and the role of depression, catastrophizing, fear-avoidance behavior, anxiety and somatization. Participants characterized CLBP as persistent tolerable low back pain (TLBP) interrupted by periods of intolerable low back pain (ILBP). ILBP contributed to recurring bouts of helplessness, depression, frustration with the medical system and increased fear based on the perceived consequences of anticipated recurrences, all of which were mediated by the uncertainty of such pain. During times of TLBP, all participants maintained a permanent pain consciousness, as they felt susceptible to experiencing a recurrence. As CLBP progressed, participants felt they were living with a weakness, became isolated from those without CLBP and integrated pain into their self-concept.
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over a network, for example to mitigate the spread of a pandemic disease or a computer virus. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even many approximation strategies, such as dynamic programming and evolutionary algorithms, are unusable for networks containing thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks.
The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
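To make the objective concrete, the following is a minimal Python sketch of the critical node detection setting described above: pairwise connectivity is the sum of s(s-1)/2 over the connected components of the residual graph, computed here with an iterative depth-first search. The degree-based ranking and single-pass re-ranking are simplified stand-ins chosen for illustration, not the thesis's DFSH ranking function.

```python
from collections import defaultdict

def pairwise_connectivity(adj, removed):
    # Sum of s*(s-1)/2 over the connected components of the residual
    # graph (vertices in `removed` deleted), via iterative DFS.
    seen = set(removed)
    total = 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += size * (size - 1) // 2
    return total

def remove_by_local_rank(adj, k):
    # Illustrative local ranking: score each vertex by its residual
    # degree and remove the k best-ranked vertices one at a time,
    # re-ranking after each removal (a cheap stand-in for the thesis's
    # post-processing re-rank step).
    removed = set()
    for _ in range(k):
        best = max((v for v in adj if v not in removed),
                   key=lambda v: sum(1 for w in adj[v] if w not in removed))
        removed.add(best)
    return removed

# Toy example: vertex 0 bridges two triangles, so removing it
# disconnects the graph into two small components.
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

crit = remove_by_local_rank(adj, 1)
print(sorted(crit), pairwise_connectivity(adj, crit))  # [0] 2
```

On this toy graph the intact pairwise connectivity is 10 (all five vertices mutually reachable); removing the bridging vertex drops it to 2, which is the kind of reduction the objective rewards.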
Abstract:
The mechanism whereby cytochrome c oxidase catalyses electron transfer from cytochrome c to oxygen remains an unsolved problem. Polarographic and spectrophotometric activity measurements of purified, particulate and soluble forms of beef heart mitochondrial cytochrome c oxidase presented in this thesis confirm the following characteristics of the steady-state kinetics with respect to cytochrome c: (1) Oxidation of ferrocytochrome c is first order under all conditions. (2) The relationship between substrate concentration and velocity is of the Michaelis-Menten type over a limited range of substrate concentrations at high ionic strength. (3) The reaction rate is independent of oxygen concentration until very low levels of oxygen. (4) "Biphasic" kinetic plots of enzyme activity as a function of substrate concentration are found when the range of cytochrome c concentrations is extended; the biphasicity is more apparent in low ionic strength buffer. These results imply two binding sites for cytochrome c on the oxidase: one of high affinity and one of low affinity, with Km values of 1.0 μM and 3.0 μM, respectively, under low ionic strength conditions. (5) Inhibition of the enzymic rate by azide is non-competitive with respect to cytochrome c under all conditions, indicating that an internal electron transfer step, and not binding or dissociation of c from the enzyme, is rate limiting. The "tight" binding of cytochrome c to cytochrome c oxidase is confirmed in column chromatographic experiments. The complex has a cytochrome c:oxidase ratio of 1.0 and is dissociated in media of high ionic strength. Stopped-flow spectrophotometric studies of the reduction of equimolar mixtures and complexes of cytochrome c and the oxidase were initiated in an attempt to assess the functional relevance of such a complex.
Two alternative routes for reduction of the oxidase, under conditions where the predominant species is the c-aa3 complex, are postulated: (i) electron transfer via tightly bound cytochrome c, and (ii) electron transfer via a small population of free cytochrome c interacting at the "loose" binding site implied from kinetic studies. It is impossible to conclude, based on the results obtained, which path is responsible for the reduction of cytochrome a. The rates of reduction by various reductants of free cytochrome c in high and low ionic strength, and of cytochrome c electrostatically bound to cytochrome oxidase, were investigated. Ascorbate, a negatively charged reagent, reduces free cytochrome c with a rate constant dependent on ionic strength, whereas the neutral reagents TMPD and DAD were relatively unaffected by ionic strength in their reduction of cytochrome c. The zwitterion cysteine behaved similarly to the uncharged reductants DAD and TMPD in exhibiting only a marginal response to ionic strength. Ascorbate reduces bound cytochrome c only slowly, but DAD and TMPD reduce bound cytochrome c rapidly. Reduction of cytochrome c by DAD and TMPD in the c-aa3 complex was enhanced 10-fold over DAD reduction of free c and 4-fold over TMPD reduction of free c. Thus, the importance of ionic strength for the reactivity of cytochrome c was observed, with the general conclusion that, on the cytochrome c molecule, the areas for anion (i.e. phosphate) binding, ascorbate reduction and complexation to the oxidase overlap. The increased reducibility of bound cytochrome c by the reductants DAD and TMPD supports a suggested conformational change of electrostatically bound c compared to free c. In addition, analysis of the electron distribution between cytochromes c and a in the complex suggests that the midpoint potential of cytochrome c changes with the redox state of the oxidase.
Such evidence supports models of the oxidase in which interactions within the enzyme (or the c-enzyme complex) result in altered midpoint potentials of the redox centers.
Abstract:
The design of a large and reliable DNA codeword library is a key problem in DNA-based computing. DNA codes, namely sets of fixed-length edit-metric codewords over the alphabet {A, C, G, T}, satisfy certain combinatorial constraints arising from biological and chemical restrictions on DNA strands. The primary constraints that we consider are the reverse-complement constraint and the fixed GC-content constraint, as well as the basic edit distance constraint between codewords. We focus on exploring the theory underlying DNA codes and discuss several approaches to searching for optimal DNA codes. We use Conway's lexicode algorithm and an exhaustive search algorithm to produce provably optimal DNA codes for small parameter values. A genetic algorithm is also proposed to search for sub-optimal DNA codes with relatively large parameter values, whose sizes can be taken as reasonable lower bounds on the sizes of optimal DNA codes. Furthermore, we provide tables of bounds on the sizes of DNA codes with lengths from 1 to 9 and minimum distances from 1 to 9.
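The constraints above compose naturally into a greedy, lexicode-style search. The Python sketch below is a simplified illustration of that idea, not the thesis's exact algorithm or constraint definitions: it scans length-n strands in lexicographic order and keeps a strand if it has the required GC-content and lies at edit distance at least d from every kept strand and from every kept strand's reverse complement.

```python
from itertools import product

def edit_distance(s, t):
    # Standard Levenshtein dynamic program, row by row.
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (a != b)))  # substitution
        prev = cur
    return prev[-1]

COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(w):
    # Watson-Crick complement, read in reverse.
    return w.translate(COMP)[::-1]

def gc_content(w):
    # Number of G or C bases in the strand.
    return sum(c in "GC" for c in w)

def lexicode(n, d, gc):
    # Greedy lexicode-style search: accept a word if it meets the
    # GC-content target and is far enough (edit distance >= d) from
    # every accepted word and its reverse complement.
    code = []
    for tup in product("ACGT", repeat=n):
        w = "".join(tup)
        if gc_content(w) != gc:
            continue
        if all(edit_distance(w, c) >= d and
               edit_distance(w, reverse_complement(c)) >= d
               for c in code):
            code.append(w)
    return code

small_code = lexicode(4, 2, 2)
```

The 4^n scan makes this feasible only for the small parameter values the abstract mentions; the larger parameter values are exactly where heuristic search such as a genetic algorithm becomes necessary.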
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with outputs from other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparative purposes in this thesis. The obtained results show that this thesis's work produces results comparable to the other assemblers, and that combining our contigs with the outputs of other tools produces the best results, outperforming all other investigated assemblers.
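As a rough illustration of the contig-creation subproblem mentioned above, the following toy Python sketch builds a de Bruijn graph from short reads and walks maximal unambiguous paths to emit contigs. This is a textbook-style simplification for intuition only; it is not the thesis's method or the approach of SOAPdenovo, Velvet or Meraculous, and it ignores errors, repeats that form cycles, and reverse strands.

```python
from collections import defaultdict

def build_debruijn(reads, k):
    # Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes
    # that follow it in some read.
    graph = defaultdict(set)
    for r in reads:
        for i in range(len(r) - k + 1):
            kmer = r[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def contigs(reads, k):
    # Emit contigs by walking maximal single-in, single-out paths
    # (unitigs). Branch points end a contig; cycles with no branch
    # point are ignored in this sketch.
    graph = build_debruijn(reads, k)
    indeg = defaultdict(int)
    for node, succs in graph.items():
        for s in succs:
            indeg[s] += 1
    out = []
    starts = [n for n in graph if indeg[n] != 1 or len(graph[n]) != 1]
    for start in starts:
        for nxt in graph[start]:
            contig = start + nxt[-1]
            node = nxt
            while len(graph[node]) == 1 and indeg[node] == 1:
                node = next(iter(graph[node]))
                contig += node[-1]
            out.append(contig)
    return out

# Two overlapping reads from the toy sequence AACCGGTT
# reassemble into a single contig.
print(contigs(["AACCGG", "CCGGTT"], 3))  # ['AACCGGTT']
```

Real human-scale assembly differs mainly in engineering: the graph must hold billions of k-mers, which is precisely the scalability obstacle the abstract raises.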