12 results for Slot-based task-splitting algorithms
at Brock University, Canada
Abstract:
Nonsuicidal self-injury (NSSI), which refers to the direct and deliberate destruction of bodily tissue in the absence of suicidal intent, is a serious and widespread mental health concern. Although NSSI has been differentiated from suicidal behavior on the basis of non-lethal intent, research has shown that these two behaviors commonly co-occur. Despite increased research on the link between NSSI and suicidal behavior, however, little attention has been given to why these two behaviors are associated. My doctoral dissertation specifically addressed this gap in the literature by examining the link between NSSI and several measures of suicidal risk (e.g., suicidal ideation, suicidal attempts, pain tolerance) among a large sample of young adults. The primary goal of my doctoral research was to identify individuals who engaged in NSSI and were at risk for suicidal ideation and attempts, in an effort to elucidate the processes through which psychosocial risk, NSSI, and suicidal risk may be associated. Participants were drawn from a larger sample of 1153 undergraduate students (70.3% female) at a mid-sized Canadian university. In study one, I examined whether increases in psychosocial risk and suicidal ideation were associated with changes in NSSI engagement over a one-year period. Analyses revealed that beginners, relapsed injurers, and persistent injurers were differentiated from recovered injurers and desisters by increases in psychosocial risk and suicidal ideation over time. In study two, I examined whether several NSSI characteristics (e.g., frequency, number of methods) were associated with suicidal risk using latent class analysis. Three subgroups of individuals were identified: 1) an infrequent NSSI/not high risk for suicidal behavior group, 2) a frequent NSSI/not high risk for suicidal behavior group, and 3) a frequent NSSI/high risk for suicidal behavior group. Follow-up analyses indicated that individuals in the frequent NSSI/high risk for suicidal behavior group met the clinical cutoff score for high suicidal risk and reported significantly greater levels of suicidal ideation, attempts, and risk for future suicidal behavior as compared to the other two classes. Class 3 was also differentiated by higher levels of psychosocial risk (e.g., depressive symptoms, social anxiety) relative to the other two classes, as well as a comparison group of non-injuring young adults. Finally, in study three, I examined whether NSSI was associated with pain tolerance in a lab-based task, as tolerance to pain has been shown to be a strong predictor of suicidal risk. Individuals who engaged in NSSI to regulate the need to self-punish tolerated pain longer than individuals who engaged in NSSI but not to self-punish and a non-injuring comparison group. My findings offer new insight into the associations among psychosocial risk, NSSI, and suicidal risk, and can serve to inform intervention efforts aimed at individuals at high risk for suicidal behavior. More specifically, my findings provide clinicians with several NSSI-specific risk factors (e.g., frequent self-injury, self-injuring alone, self-injuring to self-punish) that may serve as important markers of suicidal risk among individuals engaging in NSSI.
Abstract:
Although much research has explored computer-mediated communication for its application in second language instruction, there still exists a need for empirical results to guide practitioners who wish to introduce web-based activities into their instruction. This study was undertaken to explore collaborative online task-based activities for the instruction of ESL academic writing. Nine ESL students in their mid-twenties, enrolled at a community college in Ontario, engaged in two separate online prewriting activities in both a synchronous and an asynchronous environment. The students were interviewed in order to explore their perceptions of how the activities affected the generation and organization of ideas for academic essays. These interviews were triangulated with examples of the students' online writing, nonparticipatory observations of the students' interactions, and a discussion with the course instructor. The results of the study reveal that a small majority of students felt that brainstorming in writing with their peers in an asynchronous online discussion created a grammatical and lexical framework that supported idea generation and organization. The students did not feel that the synchronous chat activity was as successful. Although they felt that this activity also contributed to the generation of ideas, synchronous chat introduced a level of difficulty in communication that hindered the students' engagement in the task and failed to assist them with the organization of their ideas. The students also noted positive aspects of the web-based activities that were not related to prewriting tasks, for example, improved typing and word processing skills. Future research could explore whether online prewriting activities can assist students in the creation of essays that are syntactically or lexically complex.
Abstract:
Objective: Overuse injuries in violinists are a problem that has been primarily analyzed through the use of questionnaires. Simultaneous 3D motion analysis and EMG measurement of muscle activity has been suggested as a quantitative technique to explore this problem by identifying movement patterns and muscular demands which may predispose violinists to overuse injuries. This multi-disciplinary analysis technique has, so far, had limited use in the music world. The purpose of this study was to use it to characterize the demands of a violin bowing task. Subjects: Twelve injury-free violinists volunteered for the study. The subjects were assigned to a novice or expert group based on playing experience, as determined by questionnaire. Design and Settings: Muscle activity and movement patterns were assessed while violinists played five bowing cycles (one bowing cycle = one down-bow + one up-bow) on each string (G, D, A, E), at a pulse of 4 beats per bow and 100 beats per minute. Measurements: An upper extremity model created using coordinate data from markers placed on the right acromion process, lateral epicondyle of the humerus and ulnar styloid was used to determine minimum and maximum joint angles, ranges of motion (ROM) and angular velocities at the shoulder and elbow of the bowing arm. Muscle activity in right anterior deltoid, biceps brachii and triceps brachii was assessed during maximal voluntary contractions (MVC) and during the playing task. Data were analysed for significant differences across the strings and between experience groups. Results: Elbow flexion/extension ROM was similar across strings for both groups. Shoulder flexion/extension ROM increased across strings and was larger for the experts. Angular velocity changes mirrored changes in ROM. Deltoid was the most active of the muscles assessed (20% MVC) and displayed a pattern of constant activation to maintain shoulder abduction. Biceps and triceps were less active (4-12% MVC) and showed a more periodic 'on and off' pattern. Novices' muscle activity was higher in all cases. Experts' muscle activity showed a consistent pattern across strings, whereas the novices were more irregular. The agonist-antagonist roles of biceps and triceps during the bowing motion were clearly defined in the expert group, but not as apparent in the novice group. Conclusions: Bowing movement appears to be controlled by the shoulder rather than the elbow, as shoulder ROM changed across strings while elbow ROM remained the same. Shoulder injuries are probably due to repetition, as the muscle activity required for the movement is small. Experts require a smaller amount of muscle activity to perform the movement, possibly due to more efficient muscle activation patterns as a result of practice. This quantitative multidisciplinary approach to analysing violinists' movements can contribute to a fuller understanding of both playing demands and injury mechanisms.
Abstract:
This study assessed the usefulness of a cognitive behavior modification (CBM) intervention package with mentally retarded students in overcoming learned helplessness and improving learning strategies. It also examined the feasibility of instructing teachers in the use of such a training program in a classroom setting. A modified single-subject design across individuals was employed using two groups of three subjects. Three students from each of two segregated schools for the mentally retarded were selected using a teacher questionnaire and pupil checklist of the most learned-helpless students enrolled there. Three additional learned helplessness assessments were conducted on each subject before and after the intervention in order to evaluate the usefulness of the program in alleviating learned helplessness. A classroom environment was created with the three students from each school engaged in three twenty-minute work sessions a week with the experimenter and a tutor experimenter (TE) as instructors. Baseline measurements were established on seven targeted behaviors for each subject: task-relevant speech, task-irrelevant speech, speech denoting a positive evaluation of performance, speech denoting a negative evaluation of performance, proportion of time on task, non-verbal positive evaluation of performance and non-verbal negative evaluation of performance. The intervention package combined a variety of CBM techniques such as Meichenbaum's (1977) Stop, Look and Listen approach, role rehearsal and feedback. During the intervention each subject met with his TE twice a week for an individual half-hour session and one joint twenty-minute session with all three students, the experimenter and one TE. Five weeks after the end of this experiment one follow-up probe was conducted. All baseline, post-intervention and probe sessions were videotaped. The seven targeted behaviors were coded and comparisons of baseline, post-intervention, and probe testing were presented in graph form. Results showed a reduction in learned helplessness in all subjects. Improvement was noted in each of the seven targeted behaviors for each of the six subjects. This study indicated that mentally retarded children can be taught to reduce learned helplessness with the aid of a CBM intervention package. It also showed that CBM is a viable approach in helping mentally retarded students acquire more effective learning strategies. Because the TEs had no trouble learning and implementing this program, it was considered feasible for teachers to use similar methods in the classroom.
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal, 2) Miller's pursuit of the magic number seven, plus or minus two, 3) Ferguson's examination of transfer and abilities and, 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses. 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt, Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials and, where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that their learning would resemble that of normals on the same task (Brown, 1974). In the first experiment 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a nonpractice group. Five subjects in each group were assigned randomly to work on a five, seven and nine digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The nonpractice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups: one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group subjects were randomly assigned to work on a five, seven or nine digit code throughout. Both practice and actual tests consisted of three trials of two minutes each.
Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous practice variable was significant over all segments of the experiment: those who received previous practice were able to score significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting necessary rehearsals for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. There are environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, perceptual and temporal parameters which influence learning, and these have serious implications for educational programs.
Abstract:
Age-related differences in information processing have often been explained through deficits in older adults' ability to ignore irrelevant stimuli and suppress inappropriate responses through inhibitory control processes. Functional imaging work on young adults by Nelson and colleagues (2003) has indicated that inferior frontal and anterior cingulate cortex play a key role in resolving interference effects during a delay-to-match memory task. Specifically, inferior frontal cortex appeared to be recruited under conditions of context interference, while the anterior cingulate was associated with interference resolution at the stage of response selection. Related work has shown that specific neural activities related to interference resolution are not preserved in older adults, supporting the notion of age-related declines in inhibitory control (Jonides et al., 2000; West et al., 2004b). In this study the time course and nature of these inhibition-related processes were investigated in young and old adults using high-density ERPs collected during a modified Sternberg task. Participants were presented with four target letters followed by a probe that either did or did not match one of the target letters held in working memory. Inhibitory processes were evoked by manipulating the nature of cognitive conflict in a particular trial. Conflict in working memory was elicited by presenting a probe letter that had appeared in the immediately previous target sets. Response-based conflict was produced by presenting a negative probe that had just been viewed as a positive probe on the previous trial. Younger adults displayed a larger orienting response (P3a and P3b) to positive probes relative to a non-target baseline. Older adults produced the orienting P3a and P3b waveforms, but their responses did not differentiate between target and non-target stimuli. This age-related change in response to targetness is discussed in terms of "early selection/late correction" models of cognitive ageing. Younger adults also showed a sensitivity in their N450 response to different levels of interference. Source analysis of the N450 responses to the conflict trials of younger adults indicated an initial dipole in inferior frontal cortex and a subsequent dipole in anterior cingulate cortex, suggesting that inferior prefrontal regions may recruit the anterior cingulate to exert cognitive control functions. Individual older adults did show some evidence of an N450 response to conflict; however, this response was attenuated by a co-occurring positive deflection in the N450 time window. It is suggested that this positivity may reflect a form of compensatory activity in older adults to adapt to their decline in inhibitory control.
Abstract:
A feature-based fitness function is applied in a genetic programming system to synthesize stochastic gene regulatory network models whose behaviour is defined by a time course of protein expression levels. Typically, when targeting time series data, the fitness function is based on a sum of errors involving the values of the fluctuating signal. While this approach is successful in many instances, its performance can deteriorate in the presence of noise. This thesis explores a fitness measure determined from a set of statistical features characterizing the time series' sequence of values, rather than the actual values themselves. Through a series of experiments involving symbolic regression with added noise and gene regulatory network models based on the stochastic π-calculus, it is shown to successfully target oscillating and non-oscillating signals. This practical and versatile fitness function offers an alternate approach, worthy of consideration for use in algorithms that evaluate noisy or stochastic behaviour.
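As a rough illustration of the idea (not the thesis's implementation), the Python sketch below scores a candidate signal by the distance between feature vectors rather than by a point-wise sum of errors; the particular feature set used here (mean, standard deviation, lag-1 autocorrelation, dominant frequency) is an assumption chosen for illustration.

```python
import numpy as np

def features(series):
    """Summarize a time series by a small vector of statistical features."""
    x = np.asarray(series, dtype=float)
    centered = x - x.mean()
    diffs = np.diff(x)
    # Lag-1 autocorrelation (0 for a flat signal).
    denom = np.dot(centered, centered)
    acf1 = np.dot(centered[:-1], centered[1:]) / denom if denom > 0 else 0.0
    # Dominant frequency of the power spectrum, skipping the DC component.
    spectrum = np.abs(np.fft.rfft(centered)) ** 2
    dom_freq = (np.argmax(spectrum[1:]) + 1) / len(x) if len(spectrum) > 1 else 0.0
    return np.array([x.mean(), x.std(), diffs.mean(), diffs.std(), acf1, dom_freq])

def feature_fitness(candidate, target):
    """Lower is better: compares statistical signatures, not raw noisy values."""
    return float(np.linalg.norm(features(candidate) - features(target)))
```

Because the comparison is made in feature space, two noisy realizations of the same oscillating process can score as similar even when their point-wise error is large.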
Abstract:
The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising. For most of the test problems the GA obtains improved or best-known solutions, and the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
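The abstract does not describe the new chromosome representations, but a common single-allocation encoding is simply an array mapping each node to its hub; the hedged Python sketch below evaluates the cost of such an encoding. The collection/transfer/distribution coefficients follow the usual convention for the standard benchmark sets, the names are hypothetical, and capacity constraints for the capacitated variant are omitted.

```python
import numpy as np

def sahlp_cost(allocation, w, d, fixed, chi=3.0, alpha=0.75, delta=2.0):
    """Cost of a single-allocation solution.

    allocation[i] -- hub node to which node i is assigned (a node assigned
                     to itself is an open hub).
    w[i, j]       -- flow from node i to node j.
    d[i, j]       -- unit transport cost between nodes i and j.
    fixed[k]      -- fixed cost of opening a hub at node k.
    chi, alpha, delta -- collection, discounted inter-hub transfer, and
                         distribution coefficients.
    """
    allocation = np.asarray(allocation)
    n = len(allocation)
    hubs = np.unique(allocation)
    transport = sum(
        w[i, j] * (chi * d[i, allocation[i]]
                   + alpha * d[allocation[i], allocation[j]]
                   + delta * d[allocation[j], j])
        for i in range(n) for j in range(n)
    )
    return transport + fixed[hubs].sum()
```

A GA over this encoding would evolve allocation arrays directly, with crossover exchanging hub assignments between parents and mutation reassigning a node to another open hub.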
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network, whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread, as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task, as graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allow networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exhibited functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
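The measure-vector comparison underlying the fitness evaluation can be pictured with a short sketch (Python with networkx); the specific measures listed here are illustrative assumptions rather than the thesis's exact selection.

```python
import networkx as nx
import numpy as np

def network_measures(g):
    """A few global measures commonly used to compare networks."""
    degrees = np.array([deg for _, deg in g.degree()], dtype=float)
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return np.array([
        degrees.mean(),                           # average degree
        degrees.std(),                            # degree heterogeneity
        nx.average_clustering(g),                 # clustering coefficient
        nx.average_shortest_path_length(giant),   # path length on the giant component
        nx.degree_assortativity_coefficient(g),   # degree-degree correlation
    ])

def dissimilarity(g1, g2):
    """Smaller values mean the two networks look alike under these measures."""
    return float(np.linalg.norm(network_measures(g1) - network_measures(g2)))
```

In a GP setting, a candidate graph model would be executed to produce sample graphs, and their measure vectors compared against those of the target network(s).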
Abstract:
Experimental Extended X-ray Absorption Fine Structure (EXAFS) spectra carry information about the chemical structure of metal protein complexes. However, predicting the structure of such complexes from EXAFS spectra is not a simple task. Currently, methods such as Monte Carlo optimization or simulated annealing are used in structure refinement of EXAFS. These methods have proven somewhat successful in structure refinement but have not been successful in finding the global minimum. Multiple population-based algorithms, including a genetic algorithm, a restarting genetic algorithm, differential evolution, and particle swarm optimization, are studied for their effectiveness in structure refinement of EXAFS. The oxygen-evolving complex in S1 is used as a benchmark for comparing the algorithms. These algorithms were successful in finding new atomic structures that produced improved calculated EXAFS spectra over atomic structures previously found.
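A minimal sketch of how one of these population-based refinements might be set up is shown below, using SciPy's differential evolution as a stand-in for the algorithms studied; `simulate_exafs` is a hypothetical placeholder for a theoretical EXAFS calculator (e.g., a FEFF wrapper), and the per-coordinate search window is an assumed parameter.

```python
import numpy as np
from scipy.optimize import differential_evolution

def r_factor(chi_calc, chi_exp):
    """Relative misfit between calculated and experimental chi(k)."""
    return np.sum((chi_exp - chi_calc) ** 2) / np.sum(chi_exp ** 2)

def objective(flat_coords, chi_exp, k_grid, simulate_exafs):
    """Reshape the trial vector into (n_atoms, 3) coordinates and score the spectrum."""
    coords = flat_coords.reshape(-1, 3)
    return r_factor(simulate_exafs(coords, k_grid), chi_exp)

def refine_structure(chi_exp, k_grid, start_coords, simulate_exafs, window=0.5):
    """Let each coordinate move within +/- window of its starting value."""
    flat = start_coords.ravel()
    bounds = [(x - window, x + window) for x in flat]
    result = differential_evolution(
        objective, bounds, args=(chi_exp, k_grid, simulate_exafs),
        popsize=20, maxiter=200, tol=1e-6, seed=0,
    )
    return result.x.reshape(-1, 3), result.fun
```

A genetic algorithm or particle swarm variant would swap out the optimizer while keeping the same coordinate encoding and spectrum-based objective.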
Characterizing Dynamic Optimization Benchmarks for the Comparison of Multi-Modal Tracking Algorithms
Abstract:
Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there exist cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are identified as niching algorithms. Although numerous dynamic, niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, thus suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited to a variety of dynamic environments. This comparison also examines each of the algorithms in terms of its niching behaviour and analyzes the range and trade-off between scalability and accuracy when tuning the algorithms' respective parameters. These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
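The two metrics proposed in the thesis are not given in the abstract; for orientation, the sketch below implements the widely used peak ratio, a generic measure of niching accuracy, with the detection radius as an assumed parameter.

```python
import numpy as np

def peak_ratio(found_solutions, known_optima, radius=0.1):
    """Fraction of known optima that have at least one found solution within
    `radius` (Euclidean distance in decision space)."""
    found = np.atleast_2d(found_solutions)
    optima = np.atleast_2d(known_optima)
    hits = sum(
        np.min(np.linalg.norm(found - optimum, axis=1)) <= radius
        for optimum in optima
    )
    return hits / len(optima)
```

In a dynamic setting this would be recomputed after every environment change, so that an algorithm is rewarded for re-locating optima as the landscape shifts.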
Abstract:
Feature selection plays an important role in knowledge discovery and data mining nowadays. In traditional rough set theory, feature selection using reducts - minimal discerning sets of attributes - is an important area. Nevertheless, the original definition of a reduct is restrictive, so in previous research it was proposed to take into account not only the horizontal reduction of information by feature selection, but also a vertical reduction considering suitable subsets of the original set of objects. Following the work mentioned above, a new approach to generating bireducts using a multi-objective genetic algorithm was proposed. Although genetic algorithms were used to calculate reducts in some previous works, we did not find any work where genetic algorithms were adopted to calculate bireducts. Compared to the works done before in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm system estimated the quality of each bireduct by the values of two objective functions as evolution progressed, so consequently a set of bireducts with optimized values of these objectives was obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied and the prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used to perform a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine the significant differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts necessary to achieve good prediction accuracy. Also, the influence of different genetic operators and fitness evaluation strategies on the prediction accuracy was analyzed. It was shown that the prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and in some cases outperform them.
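The abstract does not give the exact objective functions, but under the standard bireduct definition (the selected attributes must discern every pair of selected objects with different decisions) a plausible two-objective evaluation looks like the following sketch; the encoding as a pair of boolean masks and the infeasibility penalty are assumptions.

```python
import numpy as np

def is_consistent(attr_mask, obj_mask, X, y):
    """True if the selected attributes discern every pair of selected objects
    that have different decision values (the defining bireduct condition)."""
    rows = map(tuple, X[obj_mask][:, attr_mask])
    seen = {}
    for row, label in zip(rows, y[obj_mask]):
        if seen.setdefault(row, label) != label:
            return False
    return True

def bireduct_objectives(attr_mask, obj_mask, X, y):
    """Two objectives for a multi-objective GA: minimize the number of selected
    attributes, maximize the number of covered objects."""
    if not is_consistent(attr_mask, obj_mask, X, y):
        return X.shape[1] + 1, 0  # penalty: worse than any feasible candidate
    return int(attr_mask.sum()), int(obj_mask.sum())
```

A chromosome would then simply concatenate the two masks, and the GA would maintain a Pareto front of attribute-lean, object-rich bireducts.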