20 results for Prediction algorithms
at Brock University, Canada
Abstract:
This study examined the effect of explicitly instructing students to use a repertoire of reading comprehension strategies. Specifically, this study examined whether providing students with a "predictive story-frame" which combined the use of prediction and summarization strategies improved their reading comprehension relative to providing students with generic instruction on prediction and summarization. Results were examined in terms of instructional condition and reading ability. Students from 2 grade 4 classes participated in this study. The reading component of the Canadian Achievement Tests, Second Edition (CAT/2) was used to identify students as either "average or above average" or "below average" readers. Students received either strategic prediction and summarization instruction (story-frame) or generic prediction and summarization instruction (notepad). Students were provided with new but comparable stories for each session. For both groups, the researcher modelled the strategic tools and provided guided practice, independent practice, and independent reading sessions. Comprehension was measured with an immediate and 1-week delayed comprehension test for each of the 4 stories. In addition, students participated in a 1-week delayed interview, where they were asked to retell the story and to answer questions about the central elements (character, setting, problem, solution, beginning, middle, and ending events) of each story. There were significant differences, with medium to large effect sizes, in comprehension and recall scores as a function of both instructional condition and reading ability. Students in the story-frame condition outperformed students in the notepad condition, and average to above average readers performed better than below average readers. Students in the story-frame condition outperformed students in the notepad condition on the comprehension tests and on the oral retellings when teacher modelling and guidance were present. In the cued recall sessions, students in the story-frame instructional condition recalled more correct information and generated fewer errors than students in the notepad condition. Average to above average readers performed better than below average readers across comprehension and retelling measures. The majority of students in both instructional conditions reported that they would use their strategic tool again.
Abstract:
This research attempted to address the question of the role of explicit algorithms and episodic contexts in the acquisition of computational procedures for regrouping in subtraction. Three groups of students having difficulty learning to subtract with regrouping were taught procedures for doing so through either an explicit algorithm, an episodic context, or an examples approach. It was hypothesized that the use of an explicit algorithm represented in a flow chart format would facilitate the acquisition and retention of specific procedural steps relative to the other two conditions. On the other hand, the use of paragraph stories to create episodic context was expected to facilitate the retrieval of algorithms, particularly in a mixed presentation format. The subjects were tested on similar, near, and far transfer questions over a four-day period. Near and far transfer algorithms were also introduced on Day Two. The results suggested that both explicit algorithms and episodic contexts facilitate performance on questions requiring subtraction with regrouping. However, the differential effects of these two approaches on near and far transfer questions were not as easy to identify. Explicit algorithms may facilitate the acquisition of specific procedural steps while at the same time inhibiting the application of such steps to transfer questions. Similarly, the value of episodic context in cuing the retrieval of an algorithm may be limited by the ability of a subject to identify and classify a new question as an exemplar of a particular episodically defined problem type or category. The implications of these findings in relation to the procedures employed in the teaching of mathematics to students with learning problems are discussed in detail.
Abstract:
Personality traits and personal values are two important domains of individual differences. Traits are enduring and distinguishable patterns of behaviour, whereas values are societally taught, stable, individual preferences that guide behaviour in order to reach a specific end state. The purpose of the present study was to investigate the relations between self and peer report within the domains of personality traits and values, to examine the correlations between values and traits, and to explore the amount of incremental validity of traits and values in predicting behaviour. Two hundred and fifty-two men and women from a university setting completed self and peer reports on three questionnaires. In order to assess personality traits, the HEXACO-PI (Lee & Ashton, 2004) was used to identify levels of 6 major dimensions of personality in participants. To assess values, the Schwartz Value Survey (Schwartz, 1992) was used to identify the importance each participant placed on each of Schwartz's 10 value types. To measure behaviour, a Behavior Scale created by Bardi and Schwartz (2003), consisting of items designed to measure the frequency of value-expressive behaviour, was used. As expected, correlations between self and peer reports for the personality scales were high, indicating that personality traits are easily observable to other people. Correlations between self and peer reports for the values and behaviour scales were only moderate, suggesting that some goals, and behaviours expressive of those goals, may not always be observable to others. Consistent with previous research, there were many strong correlations between traits and values. In addition to the similarities with past research, the present study found that the personality factor Honesty-Humility was correlated strongly with values scales (with five correlations exceeding .25). In the prediction of behaviour, it was found that both personality and values were able to account for significant and similar amounts of variance. Personality outpredicted values for some behaviours, but the opposite was true of other behaviours. Each domain provided incremental validity beyond the other domain. The implications of these findings, along with limitations and possibilities for future research, are also discussed.
Abstract:
In studies of cognitive processing, the allocation of attention has been consistently linked to subtle, phasic adjustments in autonomic control. Both autonomic control of heart rate and control of the allocation of attention are known to decline with age. It is not known, however, whether characteristic individual differences in autonomic control and the ability to control attention are closely linked. To test this, a measure of parasympathetic function, vagal tone (VT), was computed from cardiac recordings from older and younger adults taken before and during performance of two attention-demanding tasks - the Eriksen visual flanker task and the source memory task. Both tasks elicited event-related potentials (ERPs) that accompany errors, i.e., error-related negativities (ERNs) and error positivities (Pe's). The ERN is a negative deflection in the ERP signal, time-locked to responses made on incorrect trials, likely generated in the anterior cingulate. It is followed immediately by the Pe, a broad, positive deflection which may reflect conscious awareness of having committed an error. Age-attenuation of ERN amplitude has previously been found in paradigms with simple stimulus-response mappings, such as the flanker task, but has rarely been examined in more complex, conceptual tasks. Until now, there have been no reports of its being investigated in a source monitoring task. Age-attenuation of the ERN component was observed in both tasks. Results also indicated that the ERNs generated in these two tasks were generally comparable for young adults. For older adults, however, the ERN from the source monitoring task was not only shallower, but incorporated more frontal processing, apparently reflecting task demands. The error positivities elicited by the two tasks were not comparable, however, and age-attenuation of the Pe was seen only in the more perceptual flanker task. For younger adults, it was Pe scalp topography that seemed to reflect task demands, being maximal over central parietal areas in the flanker task, but over very frontal areas in the source monitoring task. With respect to vagal tone, in the flanker task, neither the number of errors nor ERP amplitudes were predicted by baseline or on-task vagal tone measures. However, in the more difficult source memory task, lower VT was marginally associated with greater numbers of source memory errors in the older group. Thus, for older adults, relatively low levels of parasympathetic control over cardiac response coincided with poorer source memory discrimination. In both groups, lower levels of baseline VT were associated with larger amplitude ERNs and smaller amplitude Pe's. Thus, low VT was associated in a conceptual task with a greater "emergency response" to errors and, at the same time, reduced awareness of having made them. The efficiency of an individual's complex cognitive processing was therefore associated with the flexibility of parasympathetic control of heart rate in response to a cognitively challenging task.
Abstract:
The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative to the n-star topology in parallel computation. The (n, k)-star has significant advantages over the n-star, which itself was proposed as an attractive alternative to the popular hypercube. The major advantage of the (n, k)-star network is its scalability, which makes it more flexible than the n-star as an interconnection network. In this thesis, we will focus on finding graph theoretical properties of the (n, k)-star as well as developing parallel algorithms that run on this network. The basic topological properties of the (n, k)-star are first studied. These are useful since they can be used to develop efficient algorithms on this network. We then study the (n, k)-star network from an algorithmic point of view. Specifically, we will investigate both fundamental and application algorithms for basic communication, prefix computation, and sorting. A literature review of the state of the art in relation to the (n, k)-star network, as well as some open problems in this area, is also provided.
Abstract:
Bioinformatics applies computers to problems in molecular biology. Previous research has not addressed edit metric decoders. Decoders for quaternary edit metric codes are finding use in bioinformatics problems with applications to DNA. By using side effect machines we hope to be able to provide efficient decoding algorithms for this open problem. Two ideas for decoding algorithms are presented and examined. Both decoders use Side Effect Machines (SEMs), which are generalizations of finite state automata. Single Classifier Machines (SCMs) use a single side effect machine to classify all words within a code. Locking Side Effect Machines (LSEMs) use multiple side effect machines to create a tree structure of subclassification. The goal is to examine these techniques and provide new decoders for existing codes. Ideas for best practices in the creation of these two new types of edit metric decoders are also presented.
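To make the side effect machine idea concrete, the following is a minimal hypothetical sketch, not the decoders developed in the thesis: a small SEM over the DNA alphabet walks a fixed transition table, records how often each state is visited, and the resulting count vector is classified by distance to per-codeword centroids. The transition table, codewords, and centroids are invented for illustration; a real decoder would evolve the machine and train it on the code.

```python
# Sketch of a Side Effect Machine (SEM) used as a feature extractor for
# classifying noisy DNA words. All data below are hypothetical.

# transitions[state][symbol] -> next state (a made-up 3-state machine)
TRANSITIONS = [
    {"A": 0, "C": 1, "G": 2, "T": 1},
    {"A": 2, "C": 1, "G": 0, "T": 2},
    {"A": 1, "C": 0, "G": 2, "T": 0},
]

def sem_features(word: str) -> list[float]:
    """Run the SEM over `word` and return normalized state-visit counts."""
    counts = [0] * len(TRANSITIONS)
    state = 0
    for symbol in word:
        state = TRANSITIONS[state][symbol]
        counts[state] += 1
    total = max(len(word), 1)
    return [c / total for c in counts]

def classify(word: str, centroids: dict[str, list[float]]) -> str:
    """Assign `word` to the codeword whose centroid is nearest in feature space."""
    feats = sem_features(word)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(centroids, key=lambda cw: dist(centroids[cw]))

if __name__ == "__main__":
    # Hypothetical centroids for two codewords, as if learned from noisy copies.
    centroids = {
        "ACGTACGT": [0.25, 0.50, 0.25],
        "GGTTGGTT": [0.10, 0.30, 0.60],
    }
    print(classify("ACGTACGA", centroids))
```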
Abstract:
The (n, k)-arrangement interconnection topology was first introduced in 1992. The (n, k)-arrangement graph is a class of generalized star graphs. Compared with the well-known n-star, the (n, k)-arrangement graph is more flexible in degree and diameter. However, few algorithms have been designed for the (n, k)-arrangement graph to date. In this thesis, we will focus on finding graph theoretical properties of the (n, k)-arrangement graph and developing parallel algorithms that run on this network. The topological properties of the arrangement graph are first studied. They include the cyclic properties. We then study the problems of communication: broadcasting and routing. Embedding problems are also studied later on. These are very useful for developing efficient algorithms on this network. We then study the (n, k)-arrangement network from the algorithmic point of view. Specifically, we will investigate both fundamental and application algorithms such as prefix sums computation, sorting, merging, and basic geometry computation, namely finding the convex hull on the (n, k)-arrangement graph. A literature review of the state of the art in relation to the (n, k)-arrangement network is also provided, as well as some open problems in this area.
Abstract:
The hyper-star interconnection network was proposed in 2002 to overcome the drawbacks of the hypercube and its variations concerning the network cost, which is defined by the product of the degree and the diameter. Some properties of the graph, such as connectivity, symmetry, and embedding properties, have been studied by other researchers; routing and broadcasting algorithms have also been designed. This thesis studies the hyper-star graph from both the topological and algorithmic points of view. For the topological properties, we try to establish relationships between hyper-star graphs and other known graphs. We also give a formal equation for the surface area of the graph. Another topological property we are interested in is the Hamiltonicity problem of this graph. For the algorithms, we design an all-port broadcasting algorithm and a single-port neighbourhood broadcasting algorithm for the regular form of the hyper-star graphs. Both algorithms are optimal time-wise. Furthermore, we prove the folded hyper-star, a variation of the hyper-star, to be maximally fault-tolerant.
Abstract:
The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising. For most of the test problems, the GA obtains improved or best-known solutions and the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
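The abstract does not describe the chromosome representation or operators themselves, so the following is a minimal hedged sketch of one common GA encoding for the uncapacitated SAHLP: the chromosome is an array assigning every node to its hub, and fitness is the flow-weighted collection, discounted inter-hub transfer, and distribution cost. The distance matrix, flows, discount factor, and one-point crossover shown are illustrative assumptions, not the thesis's operators.

```python
import random

# Sketch of a GA individual for the uncapacitated single allocation hub
# location problem. assign[i] is the hub serving node i; a node assigned
# to itself is a hub. All data below are made up.
ALPHA = 0.6   # assumed inter-hub transfer discount factor
N = 6
random.seed(1)
DIST = [[abs(i - j) * 10 for j in range(N)] for i in range(N)]   # toy distances
FLOW = [[1 for _ in range(N)] for _ in range(N)]                  # toy flows

def fitness(assign):
    """Total cost: collection + discounted inter-hub transfer + distribution."""
    cost = 0.0
    for i in range(N):
        for j in range(N):
            k, m = assign[i], assign[j]
            cost += FLOW[i][j] * (DIST[i][k] + ALPHA * DIST[k][m] + DIST[m][j])
    return cost

def one_point_crossover(p1, p2):
    """Illustrative crossover: splice two assignment vectors at a random point.
    A real GA would repair the child so every assigned hub is itself a hub."""
    cut = random.randrange(1, N)
    return p1[:cut] + p2[cut:]

# Tiny usage example: two parents whose hubs are nodes 0/3 and node 3.
parent1 = [0 if i < 3 else 3 for i in range(N)]
parent2 = [3] * N
child = one_point_crossover(parent1, parent2)
print(fitness(parent1), fitness(parent2), fitness(child))
```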
Abstract:
Mathematical predictions of flow conditions along a steep gradient rock-bedded stream are examined. Stream gage discharge data and Manning's Equation are used to calculate alternative velocities, and subsequently Froude Numbers, assuming varying values of the velocity coefficient and either full depth or depth adjusted for vertical flow separation. Comparison of the results with photos shows that Froude Numbers calculated from velocities derived from Manning's Equation, assuming a velocity coefficient of 1.30 and full depth, most accurately predict flow conditions when supercritical flow is defined as Froude Number values above 0.84. Calculated Froude Number values between 0.8 and 1.1 correlate well with observed transitional flow, defined as the first appearance of small diagonal waves. Transitions from subcritical through transitional to clearly supercritical flow are predictable. Froude Number contour maps reveal a sinuous rise and fall of values reminiscent of pool-riffle energy distribution.
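For reference, the two relationships used above can be written out directly: Manning's Equation gives velocity from roughness, hydraulic radius, and slope, and the Froude Number compares that velocity to the shallow-water wave celerity. The sketch below uses US customary units and applies the velocity coefficient as a simple multiplier on velocity; that choice, and all channel values, are assumptions for illustration rather than the study's actual procedure.

```python
import math

def manning_velocity(n, R, S, k=1.49):
    """Manning's Equation (US customary units): V = (k/n) * R^(2/3) * S^(1/2)."""
    return (k / n) * R ** (2.0 / 3.0) * math.sqrt(S)

def froude_number(V, depth, alpha=1.30, g=32.2):
    """Froude Number, here with the velocity coefficient applied to V (assumed)."""
    return alpha * V / math.sqrt(g * depth)

# Invented channel reach: roughness 0.05, hydraulic radius 1.2 ft, slope 0.02, depth 1.2 ft.
V = manning_velocity(n=0.05, R=1.2, S=0.02)
Fr = froude_number(V, depth=1.2)
# The abstract treats roughly 0.8-1.1 as transitional and values above 0.84 as supercritical.
print(round(V, 2), round(Fr, 2))
```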
Abstract:
This thesis describes an ancillary project to the Early Diagnosis of Mesothelioma and Lung Cancer in Prior Asbestos Workers study and was conducted to determine the effects of asbestos exposure, pulmonary function, and cigarette smoking in the prediction of pulmonary fibrosis. A total of 613 workers who were occupationally exposed to asbestos for an average of 25.9 (SD = 14.69) years were sampled from Sarnia, Ontario. A structured questionnaire was administered during a face-to-face interview, along with a low-dose computed tomography (LDCT) scan of the thorax. Of them, 65 workers (10.7%, 95% CI 8.12-12.24) had LDCT-detected pulmonary fibrosis. The model predicting fibrosis included the variables age, smoking (dichotomized), post-FVC% splines, and post-FEV1% splines. This model had a receiver operating characteristic area under the curve of 0.738. The calibration of the model was evaluated with the R statistical program, and the bootstrap optimism-corrected calibration slope was 0.692. Thus, our model demonstrated moderate predictive performance.
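The bootstrap optimism correction mentioned above can be sketched as follows: refit the model on bootstrap resamples, measure how much better each refit performs on its own resample than on the original data, and subtract that average optimism from the apparent performance. The snippet below illustrates the idea for the area under the ROC curve with a generic logistic model on synthetic data; the same procedure applies to the calibration slope reported in the thesis. The predictors, outcome, and model are stand-ins, not the thesis's spline-based model or dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for predictors (e.g. age, smoking, lung function) and outcome.
n = 600
X = rng.normal(size=(n, 3))
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))).astype(int)

def fit_and_score(X_fit, y_fit):
    """Fit a logistic model and return it with its apparent (in-sample) AUC."""
    model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
    return model, roc_auc_score(y_fit, model.predict_proba(X_fit)[:, 1])

model, auc_apparent = fit_and_score(X, y)

# Harrell-style bootstrap optimism: average (AUC on resample - AUC on original data).
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m_b, auc_boot = fit_and_score(X[idx], y[idx])
    auc_orig = roc_auc_score(y, m_b.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print("apparent AUC:", round(auc_apparent, 3))
print("optimism-corrected AUC:", round(auc_apparent - np.mean(optimism), 3))
```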
Abstract:
The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors, and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard, and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of a cut-off point and sampling procedures were found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower costs of misclassification in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model, when the ratio of the cost of Type I errors to the cost of Type II errors is high, is relatively consistent across all sampling methods. Such an advantage of the Bayesian model may make it more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying under what conditions a model performs better. It also allays the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
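As a concrete illustration of how an empirical cut-off point can be chosen from a training sample, the sketch below scans candidate probability thresholds and keeps the one that minimizes total misclassification cost for an assumed Type I / Type II cost ratio. The predicted probabilities, labels, and 10:1 cost ratio are invented; this is not any of the three models compared in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training sample: 1 = bankrupt, 0 = healthy, with model-predicted probabilities.
y = rng.integers(0, 2, size=500)
p = np.clip(0.5 * y + 0.25 + 0.2 * rng.normal(size=500), 0.01, 0.99)

COST_TYPE_I = 10.0   # cost of classifying a bankrupt firm as healthy (assumed ratio 10:1)
COST_TYPE_II = 1.0   # cost of classifying a healthy firm as bankrupt

def total_cost(threshold):
    """Total misclassification cost if firms with p >= threshold are called bankrupt."""
    predicted_bankrupt = p >= threshold
    type_i = np.sum((y == 1) & ~predicted_bankrupt)   # missed bankruptcies
    type_ii = np.sum((y == 0) & predicted_bankrupt)   # false alarms
    return COST_TYPE_I * type_i + COST_TYPE_II * type_ii

thresholds = np.linspace(0.01, 0.99, 99)
best = min(thresholds, key=total_cost)
print("empirical cut-off:", round(float(best), 2), "cost:", total_cost(best))
```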
Abstract:
The main focus of this thesis is to evaluate and compare the Hyperball learning algorithm (HBL) to other learning algorithms. In this work, HBL is compared to feedforward artificial neural networks using backpropagation learning, K-nearest neighbour, and ID3 algorithms. In order to evaluate the similarity of these algorithms, we carried out three experiments using nine benchmark data sets from the UCI machine learning repository. The first experiment compares HBL to the other algorithms when the sample size of the dataset changes. The second experiment compares HBL to the other algorithms when the dimensionality of the data changes. The last experiment compares HBL to the other algorithms according to the level of agreement with the data target values. In general, our observations showed that, considering classification accuracy as a measure, HBL performs as well as most ANN variants. Additionally, we also deduced that HBL's classification accuracy outperforms ID3's and K-nearest neighbour's for the selected data sets.
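A simplified accuracy comparison of the baseline learners named above on a UCI-style benchmark can be set up as below. HBL itself is not a standard packaged algorithm, so it is omitted; the dataset, the entropy-based tree standing in for ID3, the MLP standing in for the backpropagation network, and all hyperparameters are illustrative choices rather than those used in the thesis.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)   # stand-in for a UCI benchmark dataset

# Baseline learners compared by cross-validated classification accuracy.
models = {
    "k-nearest neighbour": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "entropy decision tree (ID3-like)": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "backprop ANN (MLP)": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```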
Abstract:
Hub Location Problems play vital economic roles in transportation and telecommunication networks where goods or people must be efficiently transferred from an origin to a destination point whilst direct origin-destination links are impractical. This work investigates the single allocation hub location problem, and proposes a genetic algorithm (GA) approach for it. The effectiveness of using a single-objective criterion measure for the problem is first explored. Next, a multi-objective GA employing various fitness evaluation strategies such as Pareto ranking, sum of ranks, and weighted sum strategies is presented. The effectiveness of the multi-objective GA is shown by comparison with an Integer Programming strategy, the only other multi-objective approach found in the literature for this problem. Lastly, two new crossover operators are proposed and an empirical study is done using small to large problem instances of the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets.
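Of the fitness evaluation strategies listed above, Pareto ranking is the least standard to implement from scratch; a minimal sketch of ranking a population of objective vectors (both objectives minimized) is given below. The objective values are made up, and the scheme shown (rank = 1 + number of solutions that dominate the individual) is one common variant, not necessarily the one used in this work.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(points):
    """Rank each solution as 1 + the number of solutions that dominate it."""
    return [1 + sum(dominates(other, p) for other in points) for p in points]

# Hypothetical population: (total transport cost, number of hubs) per individual.
population = [(120.0, 3), (100.0, 4), (150.0, 2), (110.0, 3), (100.0, 5)]
print(pareto_ranks(population))   # non-dominated solutions receive rank 1
```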
Abstract:
The KCube interconnection topology was first introduced in 2010. The KCube graph is a compound graph of a Kautz digraph and hypercubes. Compared with the attractive Kautz digraph and the well-known hypercube graph, the KCube graph can accommodate as many nodes as possible for a given indegree (and outdegree) and diameter of interconnection networks. However, there are few algorithms designed for the KCube graph. In this thesis, we will concentrate on finding graph theoretical properties of the KCube graph and designing parallel algorithms that run on this network. We will explore several topological properties, such as bipartiteness, Hamiltonicity, and symmetry. These properties of the KCube graph are very useful for developing efficient algorithms on this network. We will then study the KCube network from the algorithmic point of view, and will give an improved routing algorithm. In addition, we will present two optimal broadcasting algorithms. They are fundamental algorithms for many applications. A literature review of state-of-the-art network designs in relation to the KCube network, as well as some open problems in this field, will also be given.