14 results for Complex network. Optimal path. Optimal path cracks
at Brock University, Canada
Abstract:
View of the Complex and the path running along it from the east.
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is limiting diffusive processes spreading over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as the size of the largest network component remaining after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large scale instances impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices sequentially impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank the remaining vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
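The objective being minimized here, pairwise connectivity, has a simple closed form: each connected component C of the residual network contributes |C|(|C|-1)/2 reachable pairs. The sketch below shows that metric together with a greedy remove-and-re-rank loop, using plain degree as a stand-in for the DFSH ranking function; the networkx-based setup and all names are illustrative assumptions, not the thesis's code.

```python
import networkx as nx

def pairwise_connectivity(G):
    # Sum of |C| * (|C| - 1) / 2 over connected components C:
    # the number of vertex pairs that can still reach each other.
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def greedy_critical_nodes(G, k):
    """Remove k vertices, re-ranking after each removal. Degree is a
    placeholder; DFSH combines several local measures into one rank."""
    H = G.copy()
    removed = []
    for _ in range(k):
        v = max(H.nodes, key=H.degree)
        H.remove_node(v)
        removed.append(v)
    return removed, pairwise_connectivity(H)

G = nx.barabasi_albert_graph(1000, 2, seed=1)
nodes, pc = greedy_critical_nodes(G, 20)
print(f"residual pairwise connectivity: {pc}")
```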
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network, whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread, as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task, as graph models are often stochastic in nature and the notion of similarity depends on the expected behaviour of the network. This thesis examines a number of well-known network properties to determine which measures best allowed networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exhibited functional similarity to their respective target networks. This approach also showed promise when used to infer a model of a mammalian brain network.
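One way to make such measure-based fitness evaluation concrete is to reduce each network to a vector of well-known properties and compare vectors; the sketch below does this with a plain Euclidean distance, a simplification standing in for the thesis's meta-analysis of measures as classifiers (the measure set and all names are illustrative assumptions).

```python
import networkx as nx

def measure_vector(G):
    # A few well-known network properties; the thesis evaluates a
    # larger set to decide which measures best distinguish models.
    return [
        nx.average_clustering(G),
        nx.degree_assortativity_coefficient(G),
        nx.average_shortest_path_length(G),  # assumes G is connected
    ]

def dissimilarity(G1, G2):
    # Euclidean distance between measure vectors.
    a, b = measure_vector(G1), measure_vector(G2)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

target = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=1)
candidate = nx.barabasi_albert_graph(500, 3, seed=1)
print(dissimilarity(target, candidate))
```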
Object-Oriented Genetic Programming for the Automatic Inference of Graph Models for Complex Networks
Abstract:
Complex networks are systems of entities that are interconnected through meaningful relationships. The relations between entities form a structure whose statistical complexity does not arise by random chance. In the study of complex networks, many graph models have been proposed to model the behaviours observed. However, constructing graph models manually is tedious and problematic, and many of the models proposed in the literature have been cited as having inaccuracies with respect to the complex networks they represent. Recently, an approach that automates the inference of graph models was proposed by Bailey [10]. The methodology employs genetic programming (GP) to produce graph models that approximate various properties of an exemplary graph of a targeted complex network. However, a great deal is already known about complex networks in general, and specific knowledge is often held about the network being modelled. Such knowledge, albeit incomplete, is important in constructing a graph model, yet it is difficult to incorporate using existing GP techniques. Thus, this thesis proposes a novel GP system which can incorporate incomplete expert knowledge to assist in the evolution of a graph model. Inspired by existing graph models, an abstract graph model was developed to serve as an embryo for inferring graph models of some complex networks. The GP system and abstract model were used to reproduce well-known graph models. The results indicated that the system was able to evolve models that produced networks with structural similarities to the networks generated by the respective target models.
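To illustrate the embryo idea in miniature: fix a growth scaffold and leave only an attachment rule open for evolution. Everything in the sketch below, from the seed graph to the hand-written preferential-attachment rule standing in for a GP-evolved expression, is an illustrative assumption rather than the thesis's abstract model.

```python
import random
import networkx as nx

def grow(n, attach_rule, seed=0):
    """Embryo-style scaffold: the growth loop is fixed; only the
    attachment rule (the evolvable part) varies between models."""
    rng = random.Random(seed)
    G = nx.complete_graph(3)  # small seed graph
    for v in range(3, n):
        G.add_node(v)
        for u in list(G.nodes):
            if u != v and rng.random() < attach_rule(G, u):
                G.add_edge(v, u)
    return G

# Hand-written stand-in for an evolved rule: attachment probability
# proportional to the existing node's share of the degree total.
pref = lambda G, u: min(1.0, G.degree(u) / max(1, G.number_of_edges()))
print(nx.average_clustering(grow(200, pref)))
```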
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem regarding human health. The vast amount of data created from a large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in a single or several biological module(s). Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Starting from a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the procured PPI network are then considered as candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes that interact strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
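A skeletal version of such a genetic algorithm is sketched below: individuals are small candidate communities, and fitness rewards edges linking the community to itself and to the known disease genes in the PPI network. The fitness function, parameters and names are illustrative assumptions; the thesis's actual formulation is more involved.

```python
import random
import networkx as nx

def fitness(G, known, individual):
    # Toy fitness: edges inside the candidate community and its links
    # to the known disease genes, minus a small size penalty.
    sub = G.subgraph(set(individual) | set(known))
    return sub.number_of_edges() - 0.5 * len(individual)

def evolve(G, known, candidates, size=5, pop=50, gens=100, seed=0):
    rng = random.Random(seed)
    popn = [rng.sample(candidates, size) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda ind: fitness(G, known, ind), reverse=True)
        survivors = popn[: pop // 2]
        children = []
        for ind in survivors:
            child = ind[:]
            child[rng.randrange(size)] = rng.choice(candidates)  # point mutation
            children.append(child)
        popn = survivors + children
    return max(popn, key=lambda ind: fitness(G, known, ind))
```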
Abstract:
Mutations in genes, simple changes in our DNA, can produce undesirable phenotypes known as genetic diseases or disorders. These small changes, which happen frequently, can have extreme results. Understanding and identifying these changes, and associating the mutated genes with genetic diseases, can play an important role in our health by enabling better diagnosis and therapeutic strategies for these genetic diseases. As a result of years of experiments, there is a vast amount of data regarding the human genome and different genetic diseases, and it still needs to be processed properly to extract useful information. This work is an effort to analyze some useful datasets and to apply different techniques to associate genes with genetic diseases. Two genetic diseases were studied here: Parkinson's disease and breast cancer. Using genetic programming, we analyzed the complex network around the known disease genes of the aforementioned diseases and generated a ranking of genes according to their relevance to these diseases. To generate these rankings, centrality measures of all nodes in the complex network surrounding the known disease genes of the given genetic disease were calculated. Using genetic programming, all the nodes were assigned scores based on the similarity of their centrality measures to those of the known disease genes. The results showed that this method is successful at finding these patterns in centrality measures, and the highly ranked genes are worthy candidate disease genes for further study. Using standard benchmark tests, we tested our approach against ENDEAVOUR and CIPHER, two well-known disease gene ranking frameworks, and we obtained comparable results.
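The centrality-profile idea can be sketched directly: compute several centrality measures for every node, then score each candidate by how close its profile is to the mean profile of the known disease genes. The fixed distance function below is a stand-in for the GP-evolved scoring the thesis uses, and the chosen measures are assumptions for illustration.

```python
import networkx as nx

def centrality_profile(G):
    # Three common centrality measures for every node in the network.
    return {
        "degree": nx.degree_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
    }

def rank_candidates(G, known):
    """Rank non-known nodes by similarity of their centrality profile
    to the mean profile of the known disease genes."""
    prof = centrality_profile(G)
    mean = {m: sum(prof[m][g] for g in known) / len(known) for m in prof}
    score = {
        v: -sum((prof[m][v] - mean[m]) ** 2 for m in prof)
        for v in G if v not in known
    }
    return sorted(score, key=score.get, reverse=True)
```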
Abstract:
Path running next to the Mackenzie Chown Complex.
Abstract:
The KCube interconnection network was first introduced in 2010 in order to exploit the good characteristics of two well-known interconnection networks, the hypercube and the Kautz graph. KCube links multiple processors in a communication network with high density for a fixed degree. Since the KCube network is newly proposed, much study is required to establish its properties and to design algorithms for solving parallel computation problems on it. In this thesis we introduce a new methodology to construct the KCube graph. With regard to this new approach, we prove the Hamiltonicity of the general KC(m, k). Moreover, we determine its connectivity and present an optimal broadcasting scheme in which a source node containing a message communicates it to all other processors. In addition to KCube networks, we have studied a version of the routing problem in the traditional hypercube, investigating whether there exists a shortest path in a Q_n between the two nodes 0^n and 1^n when the network is experiencing failed components. We first discuss this problem under a constraint on the number of faulty nodes, and subsequently introduce an algorithm to tackle the problem without restrictions on the number of faulty nodes.
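For the hypercube half of the problem, a direct way to probe whether a path from 0^n to 1^n survives a given fault set is breadth-first search over the fault-free subgraph (labels are n-bit integers; neighbours differ in exactly one bit). This is a brute-force illustration of the routing question, not the thesis's algorithm, and all names are assumptions.

```python
from collections import deque

def hypercube_path(n, faulty):
    """BFS in Q_n from 0^n to 1^n avoiding faulty nodes; returns a
    shortest surviving path as a list of labels, or None."""
    src, dst = 0, (1 << n) - 1
    blocked = set(faulty)
    if src in blocked or dst in blocked:
        return None
    parent = {src: None}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for i in range(n):
            u = v ^ (1 << i)  # flip bit i to reach a neighbour
            if u not in blocked and u not in parent:
                parent[u] = v
                q.append(u)
    return None

print(hypercube_path(4, faulty={1, 2}))  # a length-4 path still exists
```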
Abstract:
by Ilan Averbuch presented to Brock in 1988.
Abstract:
Optimal challenge occurs when an individual perceives the challenge of the task to be matched by his or her own skill level (Csikszentmihalyi, 1990). The purpose of this study was to test the impact of the OPTIMAL model on physical education students' motivation and perceptions of optimal challenge across four games categories (i.e. target, batting/fielding, net/wall, invasion). Enjoyment, competence, student goal orientation and activity level were examined in relation to the OPTIMAL model. A total of 22 (17 M; 5 F) students and their parents provided informed consent to take part in the study, and the students were taught four OPTIMAL lessons and four non-OPTIMAL lessons ranging across the four different games categories by their own teacher. All students completed the Task and Ego Orientation in Sport Questionnaire (TEOSQ; Duda & Whitehead, 1998), the Intrinsic Motivation Inventory (IMI; McAuley, Duncan, & Tammen, 1987) and the Children's Perception of Optimal Challenge Instrument (CPOCI; Mandigo, 2001). Sixteen students (two each lesson) were observed using the System for Observing Fitness Instruction Time tool (SOFIT; McKenzie, 2002). As well, they participated in a structured interview after each lesson was completed. Quantitative results showed no overall significant difference in motivational outcomes between OPTIMAL and non-OPTIMAL lessons. However, when the lessons were broken down into games categories, significant differences emerged. Levels of perceived competence were higher in non-OPTIMAL batting/fielding lessons compared to OPTIMAL lessons, whereas levels of enjoyment and perceived competence were higher in OPTIMAL invasion lessons than in non-OPTIMAL invasion lessons. Qualitative results revealed significant effects on feelings of skill/challenge balance, enjoyment and competence in the OPTIMAL lessons. Moreover, the percentage of active movement time in OPTIMAL lessons was practically twice that found in non-OPTIMAL lessons.
Abstract:
The mechanism whereby cytochrome c oxidase catalyses electron transfer from cytochrome c to oxygen remains an unsolved problem. Polarographic and spectrophotometric activity measurements of purified, particulate and soluble forms of beef heart mitochondrial cytochrome c oxidase presented in this thesis confirm the following characteristics of the steady-state kinetics with respect to cytochrome c: (1) oxidation of ferrocytochrome c is first order under all conditions. (2) The relationship between substrate concentration and velocity is of the Michaelis-Menten type over a limited range of substrate concentrations at high ionic strength. (3) The reaction rate is independent of oxygen concentration until very low levels of oxygen. (4) "Biphasic" kinetic plots of enzyme activity as a function of substrate concentration are found when the range of cytochrome c concentrations is extended; the biphasicity is more apparent in low ionic strength buffer. These results imply two binding sites for cytochrome c on the oxidase: one of high affinity and one of low affinity, with Km values of 1.0 µM and 3.0 µM, respectively, under low ionic strength conditions. (5) Inhibition of the enzymic rate by azide is non-competitive with respect to cytochrome c under all conditions, indicating that an internal electron transfer step, and not binding or dissociation of c from the enzyme, is rate limiting. The "tight" binding of cytochrome c to cytochrome c oxidase is confirmed in column chromatographic experiments. The complex has a cytochrome c:oxidase ratio of 1.0 and is dissociated in media of high ionic strength. Stopped-flow spectrophotometric studies of the reduction of equimolar mixtures and complexes of cytochrome c and the oxidase were initiated in an attempt to assess the functional relevance of such a complex. Two alternative routes for reduction of the oxidase, under conditions where the predominant species is the c-aa3 complex, are postulated: (i) electron transfer via tightly bound cytochrome c, (ii) electron transfer via a small population of free cytochrome c interacting at the "loose" binding site implied from kinetic studies. It is impossible to conclude, based on the results obtained, which path is responsible for the reduction of cytochrome a. The rate of reduction by various reductants of free cytochrome c in high and low ionic strength, and of cytochrome c electrostatically bound to cytochrome oxidase, was investigated. Ascorbate, a negatively charged reagent, reduces free cytochrome c with a rate constant dependent on ionic strength, whereas the neutral reagents TMPD and DAD were relatively unaffected by ionic strength in their reduction of cytochrome c. The zwitterion cysteine behaved similarly to the uncharged reductants DAD and TMPD in exhibiting only a marginal response to ionic strength. Ascorbate reduces bound cytochrome c only slowly, but DAD and TMPD reduce bound cytochrome c rapidly. Reduction of cytochrome c by DAD and TMPD in the c-aa3 complex was enhanced 10-fold over DAD reduction of free c and 4-fold over TMPD reduction of free c. Thus, the importance of ionic strength on the reactivity of cytochrome c was observed, with the general conclusion being that on the cytochrome c molecule the areas for anion (i.e. phosphate) binding, ascorbate reduction and complexation to the oxidase overlap. The increased reducibility of bound cytochrome c by the reductants DAD and TMPD supports a suggested conformational change of electrostatically bound c compared to free c.
In addition, analysis of electron distribution between cytochromes c and a in the complex suggests that the midpotential of cytochrome c changes with the redox state of the oxidase. Such evidence supports models of the oxidase which suggest that interactions within the enzyme (or c-enzyme complex) result in altered midpoint potentials of the redox centers.
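The biphasic kinetics in point (4) are what a two-independent-site scheme predicts: the observed rate is the sum of two Michaelis-Menten terms, one per binding site. This standard form is stated here for orientation; it is not quoted from the thesis.

```latex
\[
  v([c]) \;=\; \frac{V_{\max,1}\,[c]}{K_{m,1}+[c]} \;+\; \frac{V_{\max,2}\,[c]}{K_{m,2}+[c]},
  \qquad K_{m,1}\approx 1.0~\mu\mathrm{M},\quad K_{m,2}\approx 3.0~\mu\mathrm{M}
\]
```

An Eadie-Hofstee plot (v against v/[c]) of such a sum is curved rather than linear, which is the "biphasic" signature noted above.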
Abstract:
Four problems of physical interest have been solved in this thesis using the path integral formalism. Using the trigonometric expansion method of Burton and de Borde (1955), we found the kernel for two interacting one-dimensional oscillators. The result is the same as one would obtain using a normal coordinate transformation. We next introduced the method of Papadopoulos (1969), which is a systematic perturbation-type method specifically geared to finding the partition function Z, or equivalently, the Helmholtz free energy F, of a system of interacting oscillators. We applied this method to the next three problems considered. First, by summing the perturbation expansion, we found F for a system of N interacting Einstein oscillators. The result obtained is the same as the usual result obtained by Shukla and Muller (1972). Next, we found F to O(λ⁴), where λ is the usual Van Hove ordering parameter. The results obtained are the same as those of Shukla and Cowley (1971), who used a diagrammatic procedure and did the necessary sums in Fourier space; we performed the work in temperature space. Finally, slightly modifying the method of Papadopoulos, we found the finite temperature expressions for the Debye-Waller factor in Bravais lattices, to O(λ²) and O(|K|⁴), where K is the scattering vector. The high temperature limits of the expressions obtained here are in complete agreement with the classical results of Maradudin and Flinn (1963).
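For orientation, the quantities computed here are related by standard definitions (not specific to this thesis): the partition function is a trace expressible as an imaginary-time path integral over closed paths, the free energy follows from it, and a single harmonic oscillator has an elementary closed form.

```latex
\begin{gather*}
  Z = \operatorname{Tr} e^{-\beta H}
    = \oint \mathcal{D}x(\tau)\, e^{-S_E[x]/\hbar},
  \qquad F = -\frac{1}{\beta}\ln Z,
  \qquad \beta = \frac{1}{k_B T},\\[4pt]
  Z_{\text{h.o.}} = \frac{1}{2\sinh\!\left(\beta\hbar\omega/2\right)}
  \qquad \text{(one oscillator of frequency } \omega\text{)}
\end{gather*}
```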
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic which can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimal parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit code test cases. It matched the best known results on four of the seven Hamming codes and on all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
Abstract:
Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring. The optimal stress levels and the times of changing the stress levels are investigated. We discuss optimal designs under three optimality criteria: D-, A- and Q-optimality. We note that the classical designs are optimal only if the assumed model is correct. Because predictions from ALT data are made by extrapolating from stress levels higher than the normal use condition, the assumed model cannot be tested. Therefore, to guard against possible imprecision in the assumed PH model, a method for constructing robust designs is also explored.
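In their standard forms these criteria are stated in terms of the Fisher information matrix I(ξ) of a design ξ, with the Q-criterion targeting prediction variance at the use condition. The textbook definitions are given below for orientation; the thesis's exact objectives may differ in detail.

```latex
\begin{align*}
  \xi_D &= \arg\max_{\xi}\ \det I(\xi) && \text{(D-optimality: smallest confidence ellipsoid)}\\
  \xi_A &= \arg\min_{\xi}\ \operatorname{tr} I(\xi)^{-1} && \text{(A-optimality: smallest average parameter variance)}\\
  \xi_Q &= \arg\min_{\xi}\ \operatorname{Var}\bigl[\hat{y}(x_{\mathrm{use}};\xi)\bigr] && \text{(Q-optimality: prediction at the use condition)}
\end{align*}
```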