9 results for Anisotropic Analytical Algorithm
at Brock University, Canada
Abstract:
The purpose of this meta-analytic investigation was to review the empirical evidence specific to the effect of physical activity context on social physique anxiety (SPA). English language studies were located from computer and manual literature searches. A total of 146 initial studies were coded. Studies included in the meta-analysis presented at least one empirical effect for SPA between physical activity participants (i.e., athletes or exercisers) and non-physical activity participants. The final sample included thirteen studies, yielding 14 effect sizes, with a total sample size of 2846. Studies were coded for mean SPA between physical activity participants and non-physical activity participants. Moderator variables related to demographic and study characteristics were also coded. Using Hunter and Schmidt's (2004) protocol, statistical artifacts were corrected. Results indicate that, practically speaking, those who were physically active reported lower levels of SPA than the comparison group (dcorr = -.12; SDcorr = .22). Consideration of the magnitude of the ES, the SDcorr, and the confidence interval suggests that this effect is not statistically significant. While most moderator analyses reiterated this trend, some differences were worth noting. Previous research has identified SPA to be especially salient for females compared to males; however, in the current investigation, the magnitude of the ESs comparing physical activity participants to the comparison group was similar (dcorr = -.24 for females and dcorr = -.23 for males). Also, the type of physical activity was investigated, and results showed that athletes reported lower levels of SPA than the comparison group (dcorr = -.19, SDcorr = .08), whereas exercisers reported higher levels of SPA than the comparison group (dcorr = .13, SDcorr = .22). Results demonstrate support for the dispositional nature of SPA. Consideration of practical significance suggests that those who are involved in physical activity may experience slightly lower levels of SPA than those not reporting physical activity participation. Results potentially offer support for the bi-directionality of the relationship between physical activity and SPA; however, causality may not be inferred. More information about the type of physical activity (i.e., frequency/nature of exercise behaviour, sport classification/level of athletes) may help clarify the role of physical activity contexts on SPA.
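For context on the quantities reported above (d and its corrected standard deviation), the following is a minimal sketch of how a standardized mean difference is computed per study and pooled with sample-size weights, in the spirit of a bare-bones Hunter and Schmidt meta-analysis; it is not the authors' exact correction procedure, and all numbers and variable names are hypothetical.

```python
# Illustrative sketch (not the authors' exact procedure): a standardized mean
# difference (Cohen's d) per study and a sample-size-weighted mean across studies.
from math import sqrt

def cohens_d(mean_active, sd_active, n_active, mean_comp, sd_comp, n_comp):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = sqrt(((n_active - 1) * sd_active**2 + (n_comp - 1) * sd_comp**2)
                     / (n_active + n_comp - 2))
    return (mean_active - mean_comp) / pooled_sd

def weighted_mean_d(effects):
    """effects: list of (d, total_n) pairs; returns the n-weighted mean d."""
    total_n = sum(n for _, n in effects)
    return sum(d * n for d, n in effects) / total_n

# Hypothetical studies comparing SPA means of active vs. comparison groups.
d1 = cohens_d(2.8, 0.9, 150, 3.0, 1.0, 150)
studies = [(d1, 300), (-0.05, 900), (-0.15, 450)]
print(round(weighted_mean_d(studies), 3))
```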
Abstract:
An analytical model for bacterial accumulation in a discrete fracture has been developed. The transport and accumulation processes incorporated into the model include advection, dispersion, rate-limited adsorption, rate-limited desorption, irreversible adsorption, attachment, detachment, growth and first-order decay in both the sorbed and aqueous phases. An analytical solution in Laplace space is derived and numerically inverted. The model is implemented in the code BIOFRAC, which is written in Fortran 90. The model is derived for two phases: Phase I, where adsorption-desorption are dominant, and Phase II, where attachment-detachment are dominant. Phase I ends when enough bacteria to fully cover the substratum have accumulated. The model for Phase I was verified by comparing to the Ogata-Banks solution, and the model for Phase II was verified by comparing to a non-homogeneous version of the Ogata-Banks solution. After verification, a sensitivity analysis on the input parameters was performed. The sensitivity analysis was conducted by varying one input parameter while all others were fixed and observing the impact on the shape of the curve describing bacterial concentration versus time. Increasing the fracture aperture allows more transport and thus more accumulation, which diminishes the duration of Phase I. The larger the bacteria size, the faster the substratum will be covered. Increasing the adsorption rate was observed to increase the duration of Phase I. Contrary to the assumption of uniform biofilm thickness, the accumulation starts from the inlet, and the bacterial concentration in the aqueous phase moving towards the outlet declines, slowing the accumulation at the outlet. Increasing the desorption rate reduces the duration of Phase I, speeding up the accumulation. It was also observed that Phase II is of longer duration than Phase I. Increasing the attachment rate lengthens the accumulation period. High rates of detachment speed up the transport. The growth and decay rates have no significant effect on transport, although increases in the concentrations in both the aqueous and sorbed phases are observed. Irreversible adsorption can stop accumulation completely if the values are high.
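The Phase I verification mentioned above compares the model against the Ogata-Banks solution; as a point of reference, the sketch below implements the standard continuous-injection form of that classical solution. It is only the verification benchmark, not the BIOFRAC code, and the parameter values are hypothetical.

```python
# Classical Ogata-Banks (1961) solution for 1-D advection-dispersion with a
# constant-concentration inlet boundary; shown only as the verification
# benchmark referred to above, not as the BIOFRAC model itself.
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Concentration C at distance x (m) and time t (s), for velocity v (m/s),
    dispersion coefficient D (m^2/s) and inlet concentration c0."""
    term1 = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
    # The exponential term can overflow for strongly advection-dominated cases;
    # acceptable for a small illustrative parameter range like this one.
    term2 = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
    return 0.5 * c0 * (term1 + term2)

# Hypothetical parameters: relative concentration 1 m from the inlet after 1 day.
print(ogata_banks(x=1.0, t=86400.0, v=1e-5, D=1e-6))
```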
Abstract:
A high performance liquid chromatographic method employing two columns connected in series and separated by a switching valve has been developed for the analysis of the insecticide/nematicide oxamyl (methyl-N',N'-dimethyl-N-[(methylcarbamoyl)oxy]-1-thiooxamimidate) and two of its metabolites. A variation of this method involving two reverse phase columns was employed to monitor the persistence and translocation of oxamyl in treated peach seedlings. It was possible to simultaneously analyse for oxamyl and its corresponding oxime (methyl-N',N'-dimethyl-N-hydroxy-1-thiooxamimidate), a major metabolite of oxamyl in plants, without prior cleanup of the samples. The method allowed detection of 0.058 µg oxamyl and 0.035 µg oxime. On treated peach leaves oxamyl was found to dissipate rapidly during the first two-week period, followed by a period of slow decomposition. Movement of oxamyl or its oxime did not occur in detectable quantities to untreated leaves or to the root or soil. A second variation of the method, which employed a size exclusion column as the first column and a reverse phase column as the second, was used to monitor the degradation of oxamyl in treated, planted corn seeds and was suitable for simultaneous analysis of oxamyl, its oxime and dimethylcyanoformamide (DMCF), a metabolite of oxamyl. The method allowed detection of 0.02 µg oxamyl, 0.02 µg oxime and 0.005 µg DMCF. Oxamyl was found to persist for a period of 5 - 6 weeks, which is long enough to permit oxamyl seed treatment to be considered as a potential means of protecting young corn plants from nematode attack. Decomposition was found to be more rapid in unsterilized soil than in sterilized soil. DMCF was found to have a nematostatic effect at high concentrations (2,000 ppm), but at lower concentrations no effect on nematode mobility was observed. Oxamyl, on the other hand, was found to reduce the mobility of nematodes at concentrations down to 4 ppm.
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic which can be used for a variety of combinatorial optimization problems. This algorithm is loosely based on the path finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems - optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of six of the test cases using edit codes. It matched the best known results on four out of seven of the Hamming codes as well as three out of three of the covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
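Since two of the coding theory test problems above are defined by a required minimum Hamming distance, the following generic helper shows how the feasibility of a candidate code is typically checked; it is standard coding-theory bookkeeping rather than any part of the Salmon Algorithm, and the example codewords are made up.

```python
# Standard coding-theory helper (not part of the Salmon Algorithm itself):
# a candidate code is feasible when every pair of codewords differs in at
# least the required number of positions.
from itertools import combinations

def hamming(a, b):
    """Number of positions at which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def min_pairwise_distance(code):
    """Smallest Hamming distance over all pairs of codewords."""
    return min(hamming(a, b) for a, b in combinations(code, 2))

# Hypothetical binary code of length 7.
code = ["0000000", "1110100", "0111010", "1001110"]
print(min_pairwise_distance(code))  # feasible if >= the required minimum
```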
Abstract:
Understanding the machinery of gene regulation to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and on several biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
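As a generic illustration of the multi-objective scoring idea mentioned above, the snippet below performs a Pareto dominance check and extracts the non-dominated motifs from a set of objective vectors; it does not reproduce the side effect machine representation or the specific objectives from the thesis, and the example scores are hypothetical.

```python
# Generic Pareto-dominance utilities for a multi-objective GA (illustrative
# only; the thesis' side effect machine representation is not reproduced here).
def dominates(a, b):
    """True if objective vector a is at least as good as b everywhere and
    strictly better somewhere (assuming all objectives are maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(population):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Hypothetical motif scores: (information content, fraction of sequences covered).
scores = [(8.1, 0.60), (7.4, 0.75), (6.0, 0.50), (8.3, 0.55)]
print(non_dominated(scores))
```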
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with outputs from other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], were selected for comparative purposes in this thesis. The obtained results show that this thesis' work produces results comparable to the other assemblers, and that combining our contigs with the outputs from other tools produces the best results, outperforming all other investigated assemblers.
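The contig-creation step discussed above can be illustrated with a toy de Bruijn graph, the general data structure used by short-read assemblers such as Velvet; the sketch below is purely illustrative and is not the thesis' pipeline or any of the assemblers listed.

```python
# Toy illustration of contig creation via a de Bruijn graph (not the thesis'
# pipeline): reads are cut into k-mers and unambiguous paths become contigs.
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes following it."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def extend_contig(graph, start):
    """Walk unambiguous edges to grow a contig (assumes no cycles in this toy)."""
    contig, node = start, start
    while len(graph[node]) == 1:
        node = next(iter(graph[node]))
        contig += node[-1]
    return contig

reads = ["ACGTTGCA", "GTTGCATT"]    # hypothetical overlapping short reads
graph = de_bruijn_edges(reads, k=4)
print(extend_contig(graph, "ACG"))  # reconstructs ACGTTGCATT
```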
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high quality solutions to them, and many other types of problems are commonly reduced to ordered gene style problems so that these heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, and particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
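As a generic illustration of an ordered gene (permutation) representation, the sketch below shows a simple order-preserving crossover for a TSP-style tour; it is a variant of the classic OX operator, not the Recentering-Restarting Genetic Algorithm itself, and the example tours are hypothetical.

```python
# Simple order-preserving crossover for permutation ("ordered gene")
# chromosomes; a generic OX-style variant, not the thesis' algorithm.
import random

def order_crossover(parent1, parent2, rng=random):
    """Copy a random slice from parent1, then fill the remaining positions
    with the missing genes in the order they appear in parent2."""
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]
    fill = [gene for gene in parent2 if gene not in child]
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill.pop(0)
    return child

rng = random.Random(0)
tour_a = [0, 1, 2, 3, 4, 5, 6, 7]   # hypothetical city orderings
tour_b = [3, 7, 0, 6, 2, 5, 1, 4]
print(order_crossover(tour_a, tour_b, rng))  # always a valid permutation
```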
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem regarding human health. The vast amount of data created from a large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in a single or several biological module(s). Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Having a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the procured PPI network are then considered as candidate disease genes as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes strongly working with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's Disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
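The candidate-gene step described above, collecting genes that lie in the PPI-network vicinity of the known disease genes, can be sketched with networkx as shown below; the gene names and graph are hypothetical, and the genetic algorithm that evolves communities is not reproduced here.

```python
# Illustrative candidate-gene step: collect the immediate PPI neighbours of
# the known disease genes (the community-evolving GA is not reproduced here;
# the edge list and gene names are hypothetical).
import networkx as nx

def candidate_genes(ppi: nx.Graph, known_genes):
    """All genes adjacent to at least one known disease gene, minus the seeds."""
    seeds = set(known_genes) & set(ppi.nodes)
    neighbours = {nbr for gene in seeds for nbr in ppi.neighbors(gene)}
    return neighbours - seeds

ppi = nx.Graph([("BRCA1", "BARD1"), ("BRCA1", "TP53"),
                ("TP53", "MDM2"), ("GENE_X", "GENE_Y")])
print(candidate_genes(ppi, ["BRCA1", "TP53"]))  # {'BARD1', 'MDM2'}
```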
Abstract:
In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of all nodes for graphs obtained from the Merriam-Webster dictionary, a French dictionary, and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool for comparing the quality of dictionaries. We also studied some artificial social and random graphs. We found that when we omitted some random nodes from each of the graphs, we did not notice any significant changes in the ranking of the nodes according to their PageRank. We also discovered that some of the social graphs selected for our study were less resistant to changes in PageRank.
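The degree versus PageRank comparison described above can be reproduced on any graph with networkx and a Pearson correlation; the snippet below is a minimal sketch on a synthetic random graph rather than the dictionary graphs used in the thesis.

```python
# Minimal sketch of the degree-vs-PageRank comparison on a random directed
# graph (the thesis' dictionary graphs are not reproduced here).
import networkx as nx
from scipy.stats import pearsonr

graph = nx.gnp_random_graph(500, 0.02, seed=42, directed=True)
pagerank = nx.pagerank(graph, alpha=0.85)

nodes = list(graph.nodes)
in_degrees = [graph.in_degree(n) for n in nodes]
scores = [pagerank[n] for n in nodes]

r, p_value = pearsonr(in_degrees, scores)
print(f"Pearson correlation between in-degree and PageRank: {r:.3f}")
```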