18 results for simple algorithms
at Brock University, Canada
Abstract:
This research attempted to address the question of the role of explicit algorithms and episodic contexts in the acquisition of computational procedures for regrouping in subtraction. Three groups of students having difficulty learning to subtract with regrouping were taught procedures for doing so through either an explicit algorithm, an episodic context, or an examples approach. It was hypothesized that the use of an explicit algorithm represented in a flow chart format would facilitate the acquisition and retention of specific procedural steps relative to the other two conditions. On the other hand, the use of paragraph stories to create episodic context was expected to facilitate the retrieval of algorithms, particularly in a mixed presentation format. The subjects were tested on similar, near, and far transfer questions over a four-day period. Near and far transfer algorithms were also introduced on Day Two. The results suggested that both explicit algorithms and episodic context facilitate performance on questions requiring subtraction with regrouping. However, the differential effects of these two approaches on near and far transfer questions were not as easy to identify. Explicit algorithms may facilitate the acquisition of specific procedural steps while at the same time inhibiting the application of such steps to transfer questions. Similarly, the value of episodic context in cuing the retrieval of an algorithm may be limited by the ability of a subject to identify and classify a new question as an exemplar of a particular episodically defined problem type or category. The implications of these findings in relation to the procedures employed in the teaching of mathematics to students with learning problems are discussed in detail.
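The explicit algorithm at issue here is the familiar column-subtraction procedure with borrowing; a minimal sketch of those procedural steps (not the flow chart used in the study, and assuming the minuend is at least as large as the subtrahend) might look like:

```python
def subtract_with_regrouping(minuend, subtrahend):
    """Column subtraction with regrouping (borrowing), rightmost column first."""
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).rjust(len(top), "0")]
    result = []
    for i in range(len(top) - 1, -1, -1):
        if top[i] < bottom[i]:        # regroup: borrow ten from the column to the left
            top[i] += 10
            top[i - 1] -= 1
        result.append(top[i] - bottom[i])
    return int("".join(str(d) for d in reversed(result)))

print(subtract_with_regrouping(503, 267))  # 236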
Abstract:
We examined three different algorithms used in diffusion Monte Carlo (DMC) to study their precision and accuracy in predicting properties of isolated atoms, namely the H atom ground state, the Be atom ground state, and the H atom first excited state. All three algorithms (basic DMC, minimal stochastic reconfiguration DMC, and pure DMC), each with future-walking, are successfully implemented in ground state energy and simple moment calculations with satisfactory results. Pure diffusion Monte Carlo with the future-walking algorithm is proven to be the simplest approach with the least variance. Polarizabilities for the Be atom ground state and the H atom first excited state are not satisfactorily estimated in the infinitesimal differentiation approach. Likewise, an approach using the finite field approximation with an unperturbed wavefunction for the latter system also fails. However, accurate estimates of the α-polarizabilities are obtained by using wavefunctions that come from time-independent perturbation theory. This suggests that the flaw in our approach to polarizability estimation for these difficult cases rests with our having assumed that the trial function is unaffected by infinitesimal perturbations in the Hamiltonian.
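For orientation, the basic DMC idea can be sketched for the H atom ground state as an unguided branching random walk (no importance sampling); the time step, walker count, and population feedback below are illustrative choices, not the settings used in this work:

```python
import numpy as np

# Basic (unguided) diffusion Monte Carlo for the hydrogen atom, atomic units.
# The exact ground state energy is -0.5 hartree; this crude sketch is noisy
# but drifts toward that value.
rng = np.random.default_rng(0)
tau, n_steps, n_target = 0.01, 5000, 2000
walkers = rng.normal(scale=1.0, size=(n_target, 3))   # electron positions
e_ref, e_trace = -0.5, []

for step in range(n_steps):
    walkers = walkers + rng.normal(scale=np.sqrt(tau), size=walkers.shape)  # diffusion
    v = -1.0 / np.linalg.norm(walkers, axis=1)                              # Coulomb potential
    w = np.exp(-tau * (v - e_ref))                                          # branching weights
    copies = (w + rng.random(w.size)).astype(int)                           # stochastic rounding
    walkers = np.repeat(walkers, copies, axis=0)                            # birth/death
    e_ref += (1.0 - walkers.shape[0] / n_target) / 10.0                     # population feedback
    if step > n_steps // 2:
        e_trace.append(e_ref)                                               # growth estimator

print("estimated E0 ≈", np.mean(e_trace), "hartree (exact: -0.5)")
```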
Abstract:
The work presented in this thesis is divided into three separate sections. Each section is involved with a different problem; however, all three are involved with a microbial oxidation of a substrate. A series of aryl substituted phenyl and benzyl methyl sulphides were oxidized to the corresponding sulphoxides by Mortierella isabellina NRRL 1757. For this enzymic oxidation, based on 18O labeled experiments, the oxygen atom is derived from the atmosphere and not from water. By way of an ultraviolet analysis, the rates of oxidation, in terms of sulphoxide appearance, were obtained and correlated with the Hammett para sigma constants for the phenyl methyl sulphide series. A value of -0.67 was obtained and is interpreted in terms of a mechanism of oxidation that involves an electrophilic attack on the sulphide sulphur by an enzymic iron-oxygen activated complex and the conversion of the resulting sulphur cation to sulphoxide. A series of alkyl phenyl selenides have been incubated with the fungi Aspergillus niger ATCC 9142, Aspergillus foetidus NRRL 337, M. isabellina NRRL 1757 and Helminthosporium species NRRL 4671. These fungi have been reported to be capable of carrying out the efficient oxidation of sulphide to sulphoxide, but in no case was there any evidence to support the occurrence of a microbial oxidation. A more extensive investigation was carried out with M. isabellina; this fungus was capable of oxidizing the corresponding sulphides to sulphoxides. Using a labeled substrate, [methyl-14C]-methyl phenyl selenide, the fate of this compound was investigated following an incubation with M. isabellina. Besides the 14C analysis, a quantitative selenium analysis was carried out with phenyl methyl selenide. These techniques indicate that the selenium was capable of entering the fungal cell efficiently but that some metabolic cleavage of the selenium-carbon bond may take place. The 13C NMR shifts were assigned to the synthesized alkyl phenyl sulphides and selenides. The final section involved the incubation of ethylbenzene and p-ethyltoluene with M. isabellina NRRL 1757. Following this incubation a hydroxylated product was isolated from the medium. The 1H NMR and mass spectral data identify the products as 1-phenylethanol and p-methyl-1-phenylethanol. Employing a chiral shift reagent, tris(3-heptafluorobutyl-d-camphorato)europium(III), the enantiomeric purity of these products was investigated. An optical rotation measurement of 1-phenylethanol was in agreement with the results obtained with the chiral shift reagent. M. isabellina is capable of carrying out a hydroxylation of ethylbenzene and p-ethyltoluene at the α position.
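For reference, the Hammett correlation behind the quoted value of -0.67 has the standard textbook form (a general relation, not a derivation taken from this thesis):

\[
  \log\!\left(\frac{k_X}{k_H}\right) = \rho\,\sigma ,
\]

where k_X and k_H are the oxidation rates of the substituted and unsubstituted sulphides and σ is the substituent constant; a negative ρ of this magnitude indicates positive charge developing on sulphur in the transition state, consistent with the electrophilic attack proposed above.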
Abstract:
Our objective is to develop a diffusion Monte Carlo (DMC) algorithm to estimate the exact expectation values, ⟨Ψ₀|Â|Ψ₀⟩, of multiplicative operators Â, such as polarizabilities and high-order hyperpolarizabilities, for isolated atoms and molecules. The existing forward-walking pure diffusion Monte Carlo (FW-PDMC) algorithm which attempts this has a serious bias. On the other hand, the DMC algorithm with minimal stochastic reconfiguration provides unbiased estimates of the energies, but the expectation values ⟨Ψ₀|Â|Ψ⟩ are contaminated by Ψ, a user-specified approximate wave function, when Â does not commute with the Hamiltonian. We modified the latter algorithm to obtain the exact expectation values for these operators, while at the same time eliminating the bias. To compare the efficiency of FW-PDMC and the modified DMC algorithms, we calculated simple properties of the H atom, such as various functions of coordinates and polarizabilities. Using three non-exact wave functions, one of moderate quality and the others very crude, in each case the results are within statistical error of the exact values.
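For context, the contamination described above is the familiar mixed-estimator problem; in standard notation (a textbook relation, not this thesis's own derivation),

\[
  \langle \hat A \rangle_{\mathrm{mixed}}
    = \frac{\langle \Psi_0 | \hat A | \Psi \rangle}{\langle \Psi_0 | \Psi \rangle}
  \;\ne\;
  \frac{\langle \Psi_0 | \hat A | \Psi_0 \rangle}{\langle \Psi_0 | \Psi_0 \rangle}
    = \langle \hat A \rangle_{\mathrm{pure}}
  \qquad \text{in general, when } [\hat A, \hat H] \ne 0 ,
\]

and the usual extrapolated estimate \( \langle \hat A \rangle_{\mathrm{pure}} \approx 2\langle \hat A \rangle_{\mathrm{mixed}} - \langle \Psi | \hat A | \Psi \rangle / \langle \Psi | \Psi \rangle \) is only accurate to second order in the error of Ψ, which is why exact pure or forward/future-walking estimators are sought.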
Abstract:
Sleep spindles have been found to increase following an intense period of learning on a combination of motor tasks. It is not clear whether these changes are task specific, or a result of learning in general. The current study investigated changes in sleep spindles and spectral power following learning on cognitive procedural (C-PM), simple procedural (S-PM) or declarative (DM) learning tasks. It was hypothesized that S-PM learning would result in increases in Sigma power during Non-REM sleep, whereas C-PM and DM learning would not affect Sigma power. It was also hypothesized that DM learning would increase Theta power during REM sleep, whereas S-PM and C-PM learning would not affect Theta power. Thirty-six participants spent three consecutive nights in the sleep laboratory. Baseline polysomnographic recordings were collected on night 2. Participants were randomly assigned to one of four conditions: C-PM, S-PM, DM or control (C). Memory task training occurred on night 3 followed by polysomnographic recording. Re-testing on the respective memory tasks occurred one week following training. EEG was sampled at 256 Hz from 16 sites during sleep. Artifact-free EEG from each sleep stage was submitted to power spectral analysis. The C-PM group made significantly fewer errors, the DM group recalled more, and the S-PM group improved its performance from test to re-test. There was a significant night by group interaction for the duration of Stage 2 sleep. Independent t-tests revealed that the S-PM group had significantly more Stage 2 sleep on the test night than the C group. The C-PM and DM groups did not differ from controls in the duration of Stage 2 sleep on the test night. There was no significant change in the duration of slow wave sleep (SWS) or REM sleep. Sleep spindle density (spindles/minute) increased significantly from baseline to test night following S-PM learning, but not for the C-PM, DM or C groups. This is the first study to show the same pattern of results for spindles in SWS. Low Sigma power (12-14 Hz) increased significantly during SWS following S-PM learning but not for the C-PM, DM or C groups. This effect was maximal at Cz, and the largest increase in Sigma power was at Oz. It was also found that Theta power increased significantly during REM sleep following DM learning, but not for the S-PM, C-PM or C groups. This effect was maximal at Cz and the largest change in Theta power was observed at Cz. These findings are consistent with previous research that simple procedural learning is consolidated during Stage 2 sleep, and provide additional data to suggest that sleep spindles across all non-REM stages, and not just Stage 2 sleep, may be a mechanism for brain plasticity. This study also provides the first evidence to suggest that Theta activity during REM sleep is involved in memory consolidation.
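As an illustration of the spectral step described above, band power in the low Sigma range could be computed from an artifact-free EEG epoch roughly as follows; the signal here is synthetic, and only the sampling rate and band edges are taken from the abstract, so this is a sketch rather than the study's actual pipeline:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                             # sampling rate reported in the study (Hz)
t = np.arange(0, 30, 1 / fs)                         # one 30 s epoch
eeg = np.random.default_rng(0).normal(size=t.size)   # stand-in for recorded EEG at Cz

# Welch power spectral density, then integrate over the low Sigma band (12-14 Hz).
f, psd = welch(eeg, fs=fs, nperseg=4 * fs)
band = (f >= 12) & (f <= 14)
sigma_power = np.trapz(psd[band], f[band])
print(f"low Sigma (12-14 Hz) band power: {sigma_power:.4f} (arbitrary units)")
```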
Abstract:
The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative to the n-star topology in parallel computation. The (n, k)-star has significant advantages over the n-star, which itself was proposed as an attractive alternative to the popular hypercube. The major advantage of the (n, k)-star network is its scalability, which makes it more flexible than the n-star as an interconnection network. In this thesis, we will focus on finding graph theoretical properties of the (n, k)-star as well as developing parallel algorithms that run on this network. The basic topological properties of the (n, k)-star are first studied. These are useful since they can be used to develop efficient algorithms on this network. We then study the (n, k)-star network from an algorithmic point of view. Specifically, we will investigate both fundamental and application algorithms for basic communication, prefix computation, and sorting. A literature review of the state of the art in relation to the (n, k)-star network, as well as some open problems in this area, is also provided.
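For readers unfamiliar with the topology, the (n, k)-star S(n, k) takes the k-permutations of {1, ..., n} as vertices, with edges that either swap the first symbol with the symbol in position i or replace the first symbol with an unused one. The sketch below follows that standard definition, not anything specific to the thesis:

```python
from itertools import permutations

def nk_star(n, k):
    """Adjacency lists of the (n, k)-star graph (standard definition)."""
    vertices = list(permutations(range(1, n + 1), k))
    adj = {v: [] for v in vertices}
    for v in vertices:
        # star edges: swap the first symbol with the symbol in position i (2 <= i <= k)
        for i in range(1, k):
            adj[v].append((v[i],) + v[1:i] + (v[0],) + v[i + 1:])
        # residual edges: replace the first symbol with any symbol not in v
        for s in set(range(1, n + 1)) - set(v):
            adj[v].append((s,) + v[1:])
    return adj

g = nk_star(4, 2)
print(len(g), "vertices, each of degree", len(next(iter(g.values()))))  # 12 vertices, degree n-1 = 3
```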
Abstract:
Bioinformatics applies computers to problems in molecular biology. Previous research has not addressed edit metric decoders. Decoders for quaternary edit metric codes are finding use in bioinformatics problems with applications to DNA. By using side effect machines we hope to be able to provide efficient decoding algorithms for this open problem. Two ideas for decoding algorithms are presented and examined. Both decoders use Side Effect Machines (SEMs), which are generalizations of finite state automata. Single Classifier Machines (SCMs) use a single side effect machine to classify all words within a code. Locking Side Effect Machines (LSEMs) use multiple side effect machines to create a tree structure of subclassification. The goal is to examine these techniques and provide new decoders for existing codes. Ideas for best practices in the creation of these two types of new edit metric decoders are presented.
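One common reading of a side effect machine is a finite state automaton whose useful output is the record of how often each state is visited while scanning a word; that visit-count vector then serves as a feature vector for classification. A minimal sketch under that reading follows; the transition table is arbitrary and purely illustrative, not one of the thesis's evolved machines:

```python
# A toy side effect machine (SEM) over the DNA alphabet: its "side effect" is the
# vector of state-visit counts, which a single classifier machine could use as
# features when deciding which codeword a received word is closest to.
TRANSITIONS = {                       # state -> next state for each input symbol
    0: {"A": 1, "C": 0, "G": 2, "T": 1},
    1: {"A": 2, "C": 1, "G": 0, "T": 2},
    2: {"A": 0, "C": 2, "G": 1, "T": 0},
}

def side_effects(word, start=0):
    """Run the SEM on a word and return the state-visit counts."""
    counts = [0] * len(TRANSITIONS)
    state = start
    for symbol in word:
        state = TRANSITIONS[state][symbol]
        counts[state] += 1
    return counts

print(side_effects("ACGTACGT"))       # e.g. [2, 3, 3]
```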
Abstract:
The (n, k)-arrangement interconnection topology was first introduced in 1992. The (n, k)-arrangement graph is a class of generalized star graphs. Compared with the well-known n-star, the (n, k)-arrangement graph is more flexible in degree and diameter. However, few algorithms have been designed for the (n, k)-arrangement graph up to the present. In this thesis, we will focus on finding graph theoretical properties of the (n, k)-arrangement graph and developing parallel algorithms that run on this network. The topological properties of the arrangement graph are first studied, including its cyclic properties. We then study the problems of communication: broadcasting and routing. Embedding problems are also studied later on. These are very useful for developing efficient algorithms on this network. We then study the (n, k)-arrangement network from the algorithmic point of view. Specifically, we will investigate both fundamental and application algorithms such as prefix sums computation, sorting, merging and basic geometry computation: finding the convex hull on the (n, k)-arrangement graph. A literature review of the state of the art in relation to the (n, k)-arrangement network is also provided, as well as some open problems in this area.
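For comparison with the (n, k)-star above, the (n, k)-arrangement graph also takes the k-permutations of {1, ..., n} as vertices, but joins two permutations whenever they differ in exactly one position. The sketch below is the textbook definition, not the thesis's code:

```python
from itertools import permutations

def arrangement_graph(n, k):
    """Adjacency lists of the (n, k)-arrangement graph (standard definition)."""
    vertices = list(permutations(range(1, n + 1), k))
    adj = {v: [] for v in vertices}
    for v in vertices:
        for i in range(k):                           # change position i ...
            for s in set(range(1, n + 1)) - set(v):  # ... to any unused symbol
                adj[v].append(v[:i] + (s,) + v[i + 1:])
    return adj

g = arrangement_graph(4, 2)
print(len(g), "vertices, regular of degree", len(next(iter(g.values()))))  # 12 vertices, degree k(n-k) = 4
```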
Abstract:
The hyper-star interconnection network was proposed in 2002 to overcome the drawbacks of the hypercube and its variations concerning the network cost, which is defined by the product of the degree and the diameter. Some properties of the graph, such as connectivity, symmetry, and embedding properties, have been studied by other researchers, and routing and broadcasting algorithms have also been designed. This thesis studies the hyper-star graph from both the topological and the algorithmic point of view. For the topological properties, we try to establish relationships between hyper-star graphs and other known graphs. We also give a formal equation for the surface area of the graph. Another topological property we are interested in is the Hamiltonicity of this graph. For the algorithms, we design an all-port broadcasting algorithm and a single-port neighbourhood broadcasting algorithm for the regular form of the hyper-star graphs. Both algorithms are time-optimal. Furthermore, we prove that the folded hyper-star, a variation of the hyper-star, is maximally fault-tolerant.
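Since the network cost criterion above is simply degree multiplied by diameter, it can be evaluated for any adjacency-list graph; the sketch below does so for a small hypercube for concreteness, and assumes nothing hyper-star-specific:

```python
from collections import deque
from itertools import product

def hypercube(d):
    """Adjacency lists of the d-dimensional hypercube Q_d (vertices are bit strings)."""
    verts = ["".join(bits) for bits in product("01", repeat=d)]
    return {v: [v[:i] + ("1" if v[i] == "0" else "0") + v[i + 1:] for i in range(d)]
            for v in verts}

def eccentricity(adj, src):
    """Largest BFS distance from src."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())

adj = hypercube(4)
degree = max(len(nbrs) for nbrs in adj.values())
diameter = max(eccentricity(adj, v) for v in adj)
print("Q4: degree", degree, "diameter", diameter, "network cost", degree * diameter)  # 4, 4, 16
```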
Abstract:
The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising. For most of the test problems the GA obtains improved or best-known solutions and the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
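To make the encoding idea concrete, one common way to represent a single-allocation solution is an integer vector whose i-th gene names the hub that node i is allocated to. The sketch below shows that generic representation with a one-point crossover; the instance, the open hubs, and the operator are illustrative and are not the new representations or operators proposed in this work:

```python
import random

random.seed(1)
N_NODES, HUBS = 10, [2, 5, 7]          # hypothetical instance: 10 nodes, 3 open hubs

def random_chromosome():
    """Gene i = the hub that node i is allocated to (hubs allocate to themselves)."""
    return [i if i in HUBS else random.choice(HUBS) for i in range(N_NODES)]

def one_point_crossover(parent_a, parent_b):
    """Classic one-point crossover on the allocation vector."""
    cut = random.randrange(1, N_NODES)
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

a, b = random_chromosome(), random_chromosome()
child1, child2 = one_point_crossover(a, b)
print(a, b, child1, child2, sep="\n")
```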
Abstract:
The main focus of this thesis is to evaluate and compare the Hyperball learning algorithm (HBL) to other learning algorithms. In this work HBL is compared to feed-forward artificial neural networks using back-propagation learning, K-nearest neighbour, and ID3 algorithms. In order to evaluate the similarity of these algorithms, we carried out three experiments using nine benchmark data sets from the UCI machine learning repository. The first experiment compares HBL to the other algorithms as the sample size of the data set changes. The second experiment compares HBL to the other algorithms as the dimensionality of the data changes. The last experiment compares HBL to the other algorithms according to the level of agreement with the data target values. In general, our observations showed that, using classification accuracy as the measure, HBL performs as well as most ANN variants. Additionally, we deduced that HBL's classification accuracy outperforms ID3's and K-nearest neighbour's for the selected data sets.
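The shape of the first experiment can be sketched with off-the-shelf versions of the baseline learners (HBL itself is not publicly available, so it is omitted); the data set, split fractions, and model settings below are illustrative stand-ins for the nine UCI benchmarks and the thesis's protocol:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Accuracy of the comparison algorithms as the training sample size changes.
X, y = load_iris(return_X_y=True)
models = {
    "k-NN": KNeighborsClassifier(n_neighbors=3),
    "ID3-style tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "backprop ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
}
for frac in (0.2, 0.5, 0.8):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=frac, random_state=0, stratify=y)
    scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
    print(f"train fraction {frac}: " + ", ".join(f"{k} {v:.2f}" for k, v in scores.items()))
```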
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes that spread over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard and the number of constraints is cubic in the number of vertices, making very large scale problems impossible to solve with traditional mathematical programming techniques. Even many approximation strategies, such as dynamic programming and evolutionary algorithms, are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
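To ground the objective, the pairwise connectivity being minimized is the number of still-connected vertex pairs in the residual graph, i.e. the sum of |S|(|S|-1)/2 over its connected components S. The sketch below computes that measure and drives a greedy removal by a simple local rank; degree is used purely as a stand-in for the thesis's DFS-based ranking function, which is not reproduced here:

```python
from collections import deque

def pairwise_connectivity(adj, removed):
    """Sum of |S|*(|S|-1)/2 over connected components S of the residual graph."""
    seen, total = set(removed), 0
    for v in adj:
        if v in seen:
            continue
        size, queue = 0, deque([v])
        seen.add(v)
        while queue:                      # BFS over one component
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        total += size * (size - 1) // 2
    return total

def greedy_critical_nodes(adj, k):
    """Remove k vertices, repeatedly taking the highest-degree vertex in the residual graph."""
    removed = set()
    for _ in range(k):
        best = max((v for v in adj if v not in removed),
                   key=lambda v: sum(1 for w in adj[v] if w not in removed))
        removed.add(best)
    return removed

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4, 6], 6: [5]}
cut = greedy_critical_nodes(adj, 1)
print(cut, pairwise_connectivity(adj, cut))   # removing the articulation vertex splits the graph
```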
Abstract:
Hub Location Problems play vital economic roles in transportation and telecommunication networks where goods or people must be efficiently transferred from an origin to a destination point whilst direct origin-destination links are impractical. This work investigates the single allocation hub location problem, and proposes a genetic algorithm (GA) approach for it. The effectiveness of using a single-objective criterion measure for the problem is first explored. Next, a multi-objective GA employing various fitness evaluation strategies such as Pareto ranking, sum of ranks, and weighted sum strategies is presented. The effectiveness of the multi-objective GA is shown by comparison with an Integer Programming strategy, the only other multi-objective approach found in the literature for this problem. Lastly, two new crossover operators are proposed and an empirical study is done using small to large problem instances of the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets.
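Of the fitness strategies listed, Pareto ranking is the least self-explanatory; a minimal sketch of one common variant (rank equals the number of solutions that dominate you, for minimization) is shown below with illustrative objective vectors, not the thesis's fitness definitions:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(objectives):
    """Rank each solution by how many others dominate it (0 = non-dominated front)."""
    return [sum(dominates(other, obj) for other in objectives) for obj in objectives]

# e.g. (total transport cost, number of hubs) for four candidate hub configurations
population = [(120, 3), (100, 4), (150, 2), (130, 4)]
print(pareto_ranks(population))   # [0, 0, 0, 2]: the first three are mutually non-dominated
```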
Abstract:
The KCube interconnection topology was first introduced in 2010. The KCube graph is a compound graph of a Kautz digraph and hypercubes. Compared with the attractive Kautz digraph and the well-known hypercube graph, the KCube graph can accommodate as many nodes as possible for a given indegree (and outdegree) and diameter of the interconnection network. However, there are few algorithms designed for the KCube graph. In this thesis, we will concentrate on finding graph theoretical properties of the KCube graph and designing parallel algorithms that run on this network. We will explore several topological properties, such as bipartiteness, Hamiltonicity, and symmetry. These properties of the KCube graph are very useful for developing efficient algorithms on this network. We will then study the KCube network from the algorithmic point of view and give an improved routing algorithm. In addition, we will present two optimal broadcasting algorithms. These are fundamental algorithms for many applications. A literature review of state-of-the-art network designs in relation to the KCube network, as well as some open problems in this field, will also be given.
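As background on one of the two ingredients named above, the Kautz digraph K(d, n) has as vertices the length-(n+1) strings over an alphabet of d+1 symbols in which consecutive symbols differ, with an arc from each string to each of its left shifts. The sketch below is that standard construction only; how KCube composes it with hypercubes is not reproduced here:

```python
from itertools import product

def kautz_digraph(d, n):
    """Arcs of the Kautz digraph K(d, n): x1..x_{n+1} -> x2..x_{n+1}y with y != x_{n+1}."""
    symbols = range(d + 1)
    vertices = [s for s in product(symbols, repeat=n + 1)
                if all(s[i] != s[i + 1] for i in range(n))]
    return {v: [v[1:] + (y,) for y in symbols if y != v[-1]] for v in vertices}

g = kautz_digraph(2, 1)
print(len(g), "vertices, out-degree", len(next(iter(g.values()))))  # 6 vertices, out-degree 2
```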
Abstract:
Experimental Extended X-ray Absorption Fine Structure (EXAFS) spectra carry information about the chemical structure of metal protein complexes. However, predicting the structure of such complexes from EXAFS spectra is not a simple task. Currently, methods such as Monte Carlo optimization or simulated annealing are used in structure refinement of EXAFS. These methods have proven somewhat successful in structure refinement but have not been successful in finding the global minimum. Multiple population-based algorithms, including a genetic algorithm, a restarting genetic algorithm, differential evolution, and particle swarm optimization, are studied for their effectiveness in structure refinement of EXAFS. The oxygen-evolving complex in S1 is used as a benchmark for comparing the algorithms. These algorithms were successful in finding new atomic structures that produced improved calculated EXAFS spectra over atomic structures previously found.
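As an illustration of one of the population-based methods listed, a minimal differential evolution loop is sketched below; the fitness function is a placeholder standing in for the mismatch between calculated and experimental EXAFS spectra, and the dimensions and control parameters are illustrative, not those used for the oxygen-evolving complex benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(coords):
    """Placeholder for the EXAFS residual: here, distance of trial atomic
    coordinates from a hypothetical reference structure."""
    reference = np.linspace(0.0, 1.0, coords.size)
    return np.sum((coords - reference) ** 2)

# Standard DE/rand/1/bin loop over a population of candidate structures.
dim, pop_size, F, CR, generations = 12, 30, 0.7, 0.9, 200
pop = rng.uniform(-2.0, 2.0, size=(pop_size, dim))
scores = np.array([fitness(p) for p in pop])

for _ in range(generations):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                       # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                # keep at least one gene from the mutant
        trial = np.where(cross, mutant, pop[i])        # binomial crossover
        if (s := fitness(trial)) < scores[i]:          # greedy selection
            pop[i], scores[i] = trial, s

print("best residual after refinement:", scores.min())
```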