961 results for "Structure learning"


Relevance: 30.00%

Abstract:

Barnsley College’s level 3 and 4 diplomas in digital learning design are delivered in one year, enabling apprentices to be employed alongside their studies in the college’s innovative learning design company, Elephant Learning Designs. The limited time this allows for delivery and assessment has prompted course leaders to rethink their approach to course structure, assessment and feedback design, and the role of technology in evidence collection.

Relevance: 30.00%

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and much research has been done on solving them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Two obstacles arise when using GAs: there is no canonical mechanism for dealing with constraints, which appear in most real-world scheduling problems, and small changes to a solution are difficult to make. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even before the scheduling process is complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules.
In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph in which each node corresponds to one variable, and each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new value for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette-wheel strategy.
The reinforcement step strengthens the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone to a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
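The roulette-wheel selection and strength-reinforcement steps described above can be sketched in a few lines of Python. This is only a minimal illustration of those two mechanisms, not the authors' implementation; the rule set, reward value, and random seed are hypothetical.

```python
import random

def roulette_select(strengths, rng):
    """Pick one rule index with probability proportional to its strength."""
    total = sum(strengths)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, s in enumerate(strengths):
        acc += s
        if r <= acc:
            return i
    return len(strengths) - 1  # guard against floating-point round-off

def reinforce(strengths, used_rule, reward=1.0):
    """Strengthen the rule used in the previous solution; unused rules
    keep their strength unchanged."""
    strengths[used_rule] += reward

# Toy run: three rules start with an equal, constant initial strength;
# rule 2 is then rewarded, so roulette selection should favour it.
strengths = [1.0, 1.0, 1.0]
reinforce(strengths, 2)
rng = random.Random(42)
picks = [roulette_select(strengths, rng) for _ in range(1000)]
```

After reinforcement, rule 2 holds half the total strength, so it should be drawn roughly twice as often as either other rule over many selections.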

Relevance: 30.00%

Abstract:

This PhD thesis contains three main chapters on macro finance, with a focus on the term structure of interest rates and the applications of state-of-the-art Bayesian econometrics. Except for Chapter 1 and Chapter 5, which set out the general introduction and conclusion, each of the chapters can be considered as a standalone piece of work. In Chapter 2, we model and predict the term structure of US interest rates in a data-rich environment. We allow the model dimension and parameters to change over time, accounting for model uncertainty and sudden structural changes. The proposed time-varying parameter Nelson-Siegel Dynamic Model Averaging (DMA) model predicts yields better than standard benchmarks. DMA performs better because it incorporates more macro-finance information during recessions. The proposed method allows us to estimate plausible real-time term premia, whose countercyclicality weakened during the financial crisis. Chapter 3 investigates global term structure dynamics using a Bayesian hierarchical factor model augmented with macroeconomic fundamentals. More than half of the variation in the bond yields of seven advanced economies is due to global co-movement. Our results suggest that global inflation is the most important factor among global macro fundamentals. Non-fundamental factors are essential in driving global co-movements, and are closely related to sentiment and economic uncertainty. Lastly, we analyze asymmetric spillovers in global bond markets connected to diverging monetary policies. Chapter 4 proposes a no-arbitrage framework of term structure modeling with learning and model uncertainty. The representative agent considers parameter instability, as well as the uncertainty in learning speed and model restrictions. The empirical evidence shows that apart from observational variance, parameter instability is the dominant source of predictive variance when compared with uncertainty in learning speed or model restrictions.
When accounting for ambiguity aversion, the out-of-sample predictability of excess returns implied by the learning model can be translated into significant and consistent economic gains over the Expectations Hypothesis benchmark.
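For orientation, the static Nelson-Siegel yield curve that underlies the model in Chapter 2 can be written down directly. The sketch below uses the standard three-factor form with purely illustrative parameter values; the thesis's time-varying, model-averaged version is considerably richer than this.

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Standard Nelson-Siegel yield at maturity tau (in years).
    beta0 = level, beta1 = slope, beta2 = curvature, lam = decay rate."""
    x = lam * tau
    slope_loading = (1.0 - math.exp(-x)) / x
    curvature_loading = slope_loading - math.exp(-x)
    return beta0 + beta1 * slope_loading + beta2 * curvature_loading

# Illustrative upward-sloping curve: with a negative slope factor,
# short yields sit below the long-run level factor beta0.
short_end = nelson_siegel(0.25, beta0=4.0, beta1=-2.0, beta2=1.0, lam=0.6)
long_end = nelson_siegel(30.0, beta0=4.0, beta1=-2.0, beta2=1.0, lam=0.6)
```

As maturity grows the slope and curvature loadings decay to zero, so the fitted yield converges to the level factor `beta0`; near zero maturity it approaches `beta0 + beta1`.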

Relevance: 30.00%

Abstract:

Paper presented at PAEE/ALE'2016, the 8th International Symposium on Project Approaches in Engineering Education (PAEE) and 14th Active Learning in Engineering Education Workshop (ALE).

Relevance: 30.00%

Abstract:

An Interactive electronic Atlas (IeA) was developed to assist first-year nursing students with interpretation of laboratory-based prosected cadaveric material. It was designed, using pedagogically sound principles, as a student-centered resource accessible to students from a wide range of learning backgrounds. It consisted of a highly simplified interactive interface limited to essential anatomical structures and was intended for use in a blended learning situation. The IeA's nine modules mirrored the body systems covered in a Nursing Biosciences course, with each module comprising a maximum of 10 pages using the same template: an image displaying a cadaveric specimen and, in most cases, a corresponding anatomical model with navigation panes (menus) on one side. Cursor movement over the image or clicking the menu highlighted the structure with a transparent overlay and revealed a succinct functional description. The atlas was complemented by a multiple-choice database of nearly 1,000 questions using IeA images. Students' perceptions of usability and utility were measured by survey (n = 115; 57% of the class) revealing mean access of 2.3 times per week during the 12-week semester and a median time of three hours of use. Ratings for usability and utility were high, with means ranging between 4.24 and 4.54 (five-point Likert scale; 5 = strongly agree). Written responses told a similar story for both usability and utility. The role of providing basic computer-assisted learning support for a large first-year class is discussed in the context of current research into student-centered resources and blended learning in human anatomy.

Relevance: 30.00%

Abstract:

Distance education has developed over the past 25 years or so as a way of supplying education to people who would not otherwise have access to local college education facilities. This includes students who live in remote regions, students who lack mobility, and students with full-time jobs. More recently this has been renamed "online learning". Deakin University in Australia has been teaching freshman engineering physics simultaneously to on-campus and online students since the late 1990s. The course is part of an online Bachelor of Engineering major that is accredited by the Institution of Engineers Australia.* In this way Deakin answers the call to provide engineering education "anywhere, anytime."**

The course has developed and improved with the available educational technology. Starting with printed study guides, a textbook, CD-ROMs, snail mail, and telephone/email correspondence with students, the course has seen the rise of websites, online course notes, discussion boards, streamed video lectures, web-conferencing classes and lab sessions, and online submission of student work. Most recently the on-campus version of the course has shifted from a traditional lecture/tutorial/lab format to a flipped-classroom format. The use of lectures has been reduced while the use of tutorials and practical exercises has increased. Primary learning is now accomplished by watching videos prepared by the lecturer and studying the textbook.

Offering this course for several years by distance education made this transition considerably easier. Most of the educational "infrastructure" was already in place, and the course's delivery to a non-classroom cohort was already established. Thus many elements of the new structure did not have to be produced from scratch. Improvements to the course website and all the course material have benefited all students, both online and on-campus.

The new course structure was delivered for the first time in 2014, has run for two semesters, and will continue in 2015. Student learning and performance are being measured by assignment and exam marks for both on-campus and off-campus students. Students are also surveyed to gauge how well they received the new innovations, especially the video presentations on the lab experiments. It was found that student performance in the new structure was no worse than in the old structure (average on-campus grades increased 10%), and students in general welcomed the changes. Similar transitions are being implemented in other courses in Deakin's engineering degree program.

This presentation will show how physics is taught to online students, outline the changes made to support flipping the on-campus classroom, and describe how that process benefited the off-campus cohort.

Relevance: 30.00%

Abstract:

In 2005 the Sloan Consortium called for engineering education to be available "anywhere, anytime."* Increasing numbers of engineering departments are interested in offering their programs by means of online learning. These schools grapple with several difficulties and issues associated with wholly online learning: course structure, communication with students, delivery of course material, delivery of exams, accreditation, equity between on-campus and off-campus students, and especially the delivery of practical training. Deakin University faced these same challenges when it commenced teaching undergraduate engineering via distance education in the early 1990s. It now offers a fully accredited Bachelor of Engineering degree in both on-campus and off-campus modes, with majors that include civil, mechanical, electrical/electronics, and mechatronics/robotics.

This presentation describes Deakin's unique off-campus delivery, students, curricula, approaches to practical work, and solutions to the problems mentioned above. Attendees will experience how Deakin Engineering delivers course materials, communicates with off-campus students, runs off-campus classes, and even delivers lab experience to students living thousands of miles away from the home campus. On display will be experimental lab kits, video presentations, student projects, and online broadcasts of freshman lab experiments. Participants will have the opportunity to explore some of these resources hands-on. I will also discuss recent innovations in off-campus delivery of courses, including how flipping the classroom has led to blended learning with the on-campus students.

Many universities have placed engineering distance education into the too-hard basket. Deakin Engineering demonstrates that it is possible to deliver a full undergraduate degree by means of distance education and online learning, and modern technology makes the job easier than ever before.
The benefits to the professor are many, not the least of which is helping a student living in a remote area or with a full-time job become fully trained and qualified in engineering.

Relevance: 30.00%

Abstract:

There is growing interest in identifying inorganic-material affinity classes for peptide sequences, driven by the development of bionanotechnology and its wide applications. In particular, a selective model capable of learning cross-material affinity patterns can help us design peptide sequences with the desired binding selectivity for one inorganic material over another. However, as a newly emerging topic, it poses several distinct challenges that limit the performance of many existing peptide sequence classification algorithms. In this paper, we propose a novel framework to identify affinity classes for peptide sequences across inorganic materials. After enlarging our dataset by simulating peptide sequences, we use a context-learning-based method to obtain a vector representation of each amino acid and each peptide sequence. By analyzing the structure and affinity class of each peptide sequence, we are able to capture the semantics of amino acids and peptide sequences in a vector space. In the last step, we train our classifier on these vector features and a set of heuristic rules. The construction of our models gives us the potential to overcome the challenges of this task, and the empirical results show the effectiveness of our models.
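The representation step can be illustrated with a toy sketch. The paper learns embeddings with a context-learning (word2vec-style) method; the version below substitutes simple neighbour-count vectors and two hypothetical training sequences, just to show how per-amino-acid vectors roll up into a peptide-level vector that a classifier could then consume.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def context_vector(aa, sequences, window=1):
    """Crude stand-in for a learned embedding: count which amino acids
    occur within `window` positions of `aa` across the training sequences,
    normalised to a distribution over the 20 standard amino acids."""
    counts = Counter()
    for seq in sequences:
        for i, c in enumerate(seq):
            if c != aa:
                continue
            for j in range(max(0, i - window), min(len(seq), i + window + 1)):
                if j != i:
                    counts[seq[j]] += 1
    total = sum(counts.values()) or 1
    return [counts[b] / total for b in AMINO_ACIDS]

def peptide_vector(seq, aa_vectors):
    """Represent a peptide as the mean of its amino-acid vectors."""
    dim = len(AMINO_ACIDS)
    mean = [0.0] * dim
    for c in seq:
        for k in range(dim):
            mean[k] += aa_vectors[c][k] / len(seq)
    return mean

# Hypothetical training sequences; an affinity-class classifier would be
# trained on top of vectors like `pep`.
train = ["ACDA", "ACCA"]
aa_vecs = {a: context_vector(a, train) for a in set("".join(train))}
pep = peptide_vector("AC", aa_vecs)
```

Averaging keeps the peptide vector in the same 20-dimensional space as the amino-acid vectors; real embedding methods would replace the count vectors with learned dense ones.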

Relevance: 30.00%

Abstract:

A common response to the need to place increasing numbers of social work students in field education or practice learning placements has been to broaden the range of organisations in which placements are sought. While this strategy has provided many beneficial learning opportunities for students, it has not been sufficient in tackling ongoing difficulties in locating work-integrated learning opportunities for social work students. We argue that new approaches to finding placement opportunities will require a fundamental rethink as to how student placements are understood. This paper introduces an innovative project which started with a consideration of learning opportunities and built a structure to facilitate these, rather than rely on organisational availability to host students on placements.

Relevance: 30.00%

Abstract:

The hierarchical Dirichlet process (HDP) was originally designed for, and experimented with on, a single data channel. In this paper we enhance its ability to model heterogeneous data by using a richer, product-space structure for the base measure. The enhanced model, called Product Space HDP (PS-HDP), can (1) simultaneously model heterogeneous data from multiple sources in a Bayesian nonparametric framework and (2) discover multilevel latent structures from data, yielding different types of topics/latent structures that can be explained jointly. We experimented with the MDC dataset, a large real-world dataset collected from mobile phones. Our goal was to discover identity-location-time (a.k.a. who-where-when) patterns at different levels (globally for all groups and locally for each group). We provide analysis of the activities and patterns learned by our model, visualized, compared, and contrasted with the ground truth to demonstrate the merit of the proposed framework. We further quantitatively evaluate and report its performance using standard metrics including F1-score, NMI, RI, and purity. We also compare the performance of the PS-HDP model with those of popular existing clustering methods (including K-Means, NNMF, GMM, DP-Means, and AP). Lastly, we demonstrate the ability of the model to learn activities with missing data, a common problem encountered in pervasive and ubiquitous computing applications.
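Of the evaluation metrics mentioned, purity is the simplest to state: each discovered cluster is credited with the count of its most common ground-truth label, and the credits are summed and divided by the number of points. A minimal sketch (not the paper's evaluation code, and with toy labels):

```python
from collections import Counter

def purity(pred_labels, true_labels):
    """Cluster purity: for each predicted cluster, take the count of its
    most common true label; sum over clusters and divide by N."""
    clusters = {}
    for p, t in zip(pred_labels, true_labels):
        clusters.setdefault(p, []).append(t)
    correct = sum(Counter(members).most_common(1)[0][1]
                  for members in clusters.values())
    return correct / len(true_labels)

# Toy check: cluster 0 is majority-'a' with one 'b' mixed in,
# cluster 1 is pure 'b', so purity = (2 + 3) / 6.
p = purity([0, 0, 0, 1, 1, 1], ["a", "a", "b", "b", "b", "b"])
# → 5/6
```

Purity is easy to read but rewards over-splitting (many tiny clusters score perfectly), which is why it is usually reported alongside NMI and RI, as here.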

Relevance: 30.00%

Abstract:

Within academic institutions, writing centers are uniquely situated, socially rich sites for exploring learning and literacy. I examine the work of the Michigan Tech Writing Center's UN 1002 World Cultures study teams primarily because student participants and Writing Center coaches are actively engaged in structuring their own learning and meaning-making processes. My research reveals that learning is closely linked to identity formation and that leading the teams is an important component of the coaches' educational experiences. I argue that supporting this type of learning requires an expanded understanding of literacy and significant changes to how learning environments are conceptualized and developed. This ethnographic study draws on data collected from recordings and observations of one semester of team sessions, my own experiences as a team coach and UN 1002 teaching assistant, and interviews with Center coaches prior to their graduation. I argue that traditional forms of assessment and analysis emerging from individualized instruction models of learning cannot fully account for the dense configurations of social interactions identified in the Center's program. Instead, I view the Center as an open system and employ social theories of learning and literacy to uncover how the negotiation of meaning in one context influences and is influenced by structures and interactions within as well as beyond its boundaries. I focus on the program design, its enactment in practice, and how engagement in this type of writing center work influences coaches' learning trajectories. I conclude that the learning theory informing the program design, which frames learning as participation in a community of practice, supports identity formation, a key aspect of learning as argued by Etienne Wenger (1998).
The findings of this study challenge misconceptions of peer learning both in writing centers and higher education that relegate peer tutoring to the role of support for individualized models of learning. Instead, this dissertation calls for consideration of new designs that incorporate peer learning as an integral component. Designing learning contexts that cultivate and support the formation of new identities is complex, involves a flexible and opportunistic design structure, and requires the availability of multiple forms of participation and connections across contexts.

Relevance: 30.00%

Abstract:

The goal of the present work is to develop strategies, based on research in neuroscience, that contribute to the teaching and learning of mathematics. We discuss the interrelationship of education with the brain, as well as the relationship of cerebral structures to mathematical thinking. Strategies were developed that take into consideration levels including cognition, semiotics, language, affect, and the overcoming of phobias about the subject. The fundamental conclusion is that education will, in the near future, require a new kind of teacher, whose pedagogical formation must include knowledge of brain function, its structures, and its implications for education, as well as a change in pedagogy and curricular structure in the teaching of mathematics.