987 results for explicit och implicit inl
Abstract:
This thesis focuses on “livsfrågor” (questions of life), a typically Swedish concept introduced in the RE syllabus of the 1969 curriculum for compulsory schools. The study poses three questions: what can qualify as a “livsfråga”, why are such questions regarded as important, and how do they fit into teaching? The main purpose is to study differences in how the concept is used in two materials: primarily, interviews with teacher educators from all over Sweden, and secondly, the R.E. syllabi for compulsory and secondary schools from 1962 until today. Finally, the two materials are brought together, and foci are identified with the help of a tool for thought. The study uses Bakhtin's concept of dialogicity. Syllabi are viewed as compromises, in accordance with a German tradition. In the syllabi, “livsfrågor” is one of many different terms used without any stringency. It is not necessarily the most important term, as “livsåskådningsfrågor” (questions within philosophies of life) often dominates in the stated objectives; “existential questions” and other terms are also used. The relations between these terms are never made clear. The syllabi are in one sense monological, as the different meanings of the words are not made explicit and other utterances are not invoked. In the interviews the dialogicity is more obvious. Philosophy is mentioned (e.g. Martin Buber, Viktor Frankl), as are theology (Paul Tillich), literature (Lars Gyllensten), and existentialism in a general sense. Other terms are not as frequent, but “livsåskådningsfrågor” are of course mentioned, e.g. faith vs. knowledge. In the last chapter, “livsfrågor” is problematized with the help of Andrew Wright and his three metanarratives within modern R.E., and the assumption, made especially in the syllabi, that “livsfrågor” are common across cultures and over time is problematized with the help of feminist theories of knowledge.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and much research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism for dealing with constraints, which are common in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented in [1] and [2] for nurse scheduling and driver scheduling, where GAs search a mapped solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with one another. He can identify good parts and is aware of the solution quality even before the scheduling process is completed, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and here each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
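To make the "learning amounts to counting" point concrete, here is a minimal sketch in Python, assuming a hypothetical chain-structured network in which the rule chosen at each step depends only on the rule chosen at the previous step; the rule and step counts are illustrative, not taken from the abstract:

```python
import random

N_RULES = 4      # number of construction rules available at each step (illustrative)
N_STEPS = 10     # length of a rule string, one rule per construction step (illustrative)

def learn_model(good_strings, alpha=1.0):
    """Estimate P(rule_0) and P(rule_t | rule_{t-1}) by counting occurrences
    in a set of promising rule strings, with add-alpha smoothing so that
    unseen rule combinations remain samplable."""
    first = [alpha] * N_RULES
    trans = [[[alpha] * N_RULES for _ in range(N_RULES)] for _ in range(N_STEPS - 1)]
    for s in good_strings:
        first[s[0]] += 1
        for t in range(1, N_STEPS):
            trans[t - 1][s[t - 1]][s[t]] += 1
    return first, trans

def sample_string(first, trans):
    """Generate one new rule string node by node from the learned model."""
    string = [random.choices(range(N_RULES), weights=first)[0]]
    for t in range(1, N_STEPS):
        prev = string[-1]
        string.append(random.choices(range(N_RULES), weights=trans[t - 1][prev])[0])
    return string

# Usage: fit the model on a set of promising rule strings (random placeholders
# here), then sample a new generation of candidate strings.
promising = [[random.randrange(N_RULES) for _ in range(N_STEPS)] for _ in range(20)]
first, trans = learn_model(promising)
new_generation = [sample_string(first, trans) for _ in range(20)]
```

Because the structure is fixed and all variables are observed, fitting really is just counting; the smoothing term matters mainly in early generations, when few promising strings are available.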
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement more sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of the following three steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the roulette wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation (a sketch of these three steps follows the references below). It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms, and it may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
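A rough Python sketch of the three-step LCS procedure described above; the rule and stage counts and the fixed reward are assumptions for illustration, not the authors' implementation:

```python
import random

N_RULES, N_STAGES = 4, 10   # illustrative sizes

# Step 1: initialization -- every rule at every stage gets the same strength.
strengths = [[1.0] * N_RULES for _ in range(N_STAGES)]

def roulette_pick(weights):
    """Roulette wheel selection: pick index i with probability w_i / sum(w)."""
    return random.choices(range(len(weights)), weights=weights)[0]

def build_solution():
    """Select one rule per stage, proportional to current strengths."""
    return [roulette_pick(strengths[stage]) for stage in range(N_STAGES)]

def reinforce(solution, reward=0.1):
    """Step 2: strengthen only the rules used in the previous solution;
    unused rules keep their strengths unchanged."""
    for stage, rule in enumerate(solution):
        strengths[stage][rule] += reward

# Step 3 (selecting fitter rules for the next generation) would rank candidate
# solutions with a problem-specific fitness function; here we show one
# build/reinforce cycle.
sol = build_solution()
reinforce(sol)
```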
Abstract:
The aim of this thesis has been to examine Finnish learners' use of connectors at CEFR levels A1, A2, and B1, longitudinally and from a functional perspective. I have studied what characterizes connector use at these CEFR levels and in which functions the connectors are used at each level. Furthermore, the connector use in the material has been compared with what is stated in the CEFR criteria. Finally, I have also examined how connector use develops. As material I have used narrative texts (n = 303) written by 101 Finnish-speaking pupils in compulsory and upper secondary school. The material is part of the project Topling - Inlärningsgångar i andraspråket (Paths in Second Language Acquisition) at the University of Jyväskylä. Both quantitative and qualitative methods have been used: I have counted the frequencies of connectors and connector categories and analysed the functions in which the connectors are used. The functional analysis draws on systemic functional linguistics (Halliday & Matthiessen 2004) and Labov's (1972) model of narrative structure. The analysis has shown that connector use differs between CEFR levels A1, A2, and B1. The number of connectors increases both from level A1 to A2 and from level A2 to B1, and the share of additive and non-target-like connectors decreases, while the share of temporal, causal, and comparative connectors, as well as of att, increases. At all these CEFR levels, the connectors are used first and foremost in their prototypical functions. Certain connectors (när, eftersom, att) also appear to play a role in the narrative structure. Comparing connector use with the CEFR criteria, one can note that learners at level A1 use the pronoun den instead of sedan, even though the latter connector is mentioned in the CEFR criteria for level A1. Connector use thus appears to develop in such a way that the total number of connectors and the shares of temporal, causal, and comparative connectors, as well as of att, increase, while the shares of additive and non-target-like connectors decrease. Furthermore, learners begin to use a wider variety of connectors and, at level B1, also less frequent connectors such as om and fast. Future work should examine connector use in different text types and study whether explicit instruction affects learners' connector use.
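As an illustration of the frequency-counting step, a small Python sketch; the connector inventory and category labels are simplified placeholders, not the thesis's actual coding scheme:

```python
from collections import Counter

# Placeholder connector inventory with category labels (illustrative only).
CONNECTORS = {
    "och": "additive", "sedan": "temporal", "när": "temporal",
    "eftersom": "causal", "men": "comparative", "att": "att",
}

def connector_counts(texts_by_level):
    """texts_by_level: dict mapping a CEFR level ('A1', 'A2', 'B1') to a list
    of texts, each text given as a list of lower-cased tokens. Returns one
    Counter of connector frequencies per level."""
    counts = {}
    for level, texts in texts_by_level.items():
        level_counter = Counter()
        for tokens in texts:
            level_counter.update(t for t in tokens if t in CONNECTORS)
        counts[level] = level_counter
    return counts

# Usage with a toy text:
sample = {"A1": [["jag", "springer", "och", "sedan", "hoppar", "jag"]]}
print(connector_counts(sample))   # {'A1': Counter({'och': 1, 'sedan': 1})}
```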
Abstract:
After a crime has occurred, one of the most pressing objectives for investigators is to identify and interview any eyewitnesses who can provide information about the crime. Depending on his or her training, the investigative interviewer will rely (to varying degrees) mostly on yes/no questions and some cued and multiple-choice questions, with few open-ended questions. When the witness cannot generate any more details about the crime, one assumes the eyewitness's memory for the critical event has been exhausted. However, given what we know about memory, is this a safe assumption? In line with the extant literature on human cognition, if one assumes (a) that an eyewitness has more memories of the crime available than accessible and (b) that only explicit probes have been used to elicit information, then one can argue this eyewitness may still be able to provide additional information via implicit memory tests. In accordance with these notions, the present study had two goals: to demonstrate (1) that eyewitnesses can reveal memory implicitly for a detail-rich event, and (2) that, particularly for brief crimes, eyewitnesses can reveal memory implicitly for event details that were inaccessible when probed explicitly. Undergraduates (N = 227) participated in a psychological experiment in exchange for research credit. Participants were presented with one of three stimulus videos (brief crime vs. long crime vs. irrelevant video). Then, participants either completed a series of implicit memory tasks or worked on a puzzle for 5 minutes. Lastly, participants were interviewed explicitly about the previous video via free recall and recognition tasks. Findings indicated that participants who viewed the brief crime provided significantly more crime-related details implicitly than those who viewed the long crime. The data also showed that participants who viewed the long crime provided marginally more accurate details during free recall than participants who viewed the brief crime. Furthermore, participants who completed the implicit memory tasks provided significantly less accurate information during the explicit interview than participants who were not given implicit memory tasks. This study was the first to investigate implicit memory in eyewitnesses of a crime. To determine its applied value, additional empirical work is required.
Abstract:
Interactions with mobile devices normally happen in an explicit manner; that is, they are initiated by the user. Yet users are typically unaware that they also interact implicitly with their devices. For instance, our hand pose changes naturally when we type text messages. Whilst the touchscreen captures finger touches, the hand movements during this interaction go unused. If this implicit hand movement is observed, it can provide additional information to support or enhance the user's text entry experience. This thesis investigates how implicit sensing can be used to improve the quality of existing, standard interaction techniques. In particular, it looks into enhancing front-of-device interaction through implicit sensing of back-of-device signals and hand movement. We approach this investigation through machine learning techniques, examining how sensor data gathered implicitly can be used to predict certain aspects of an interaction. For instance, one question this thesis attempts to answer is whether hand movement during a touch targeting task correlates with the touch position. This is a complex relationship to understand, but it can be captured well by machine learning: such a correlation can be measured, quantified, understood, and used to predict future touch positions. Furthermore, this thesis evaluates the predictive power of the sensor data. We show this through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that implicitly sensed data from general mobile interactions is user-specific and can be used to identify users implicitly. In Chapter 6, we show that touch interaction errors can be detected from sensor data: in our experiment, there are sufficiently distinguishable patterns between normal interaction signals and signals strongly correlated with interaction errors. In all studies, we show that a performance gain can be achieved by combining sensor inputs.
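As one concrete reading of the Chapter 7 approach, a minimal sketch in Python with scikit-learn; the feature extraction and data shapes are assumptions for illustration, not the thesis's pipeline:

```python
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def identify_users(features, user_labels):
    """Train an SVM to identify users from implicitly sensed signals.
    features: (n_samples, n_features) array of per-window sensor features
    (e.g. statistics over accelerometer/gyroscope windows -- an assumption);
    user_labels: (n_samples,) array of user ids. Returns held-out accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, user_labels, test_size=0.25, random_state=0,
        stratify=user_labels)
    # Standardize features, then fit an RBF-kernel SVM classifier.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)
```

High held-out accuracy on such a classifier is what "the sensor data is user-specific" amounts to operationally.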
Abstract:
This text offers some contributions to the debate on the changes proposed by the National Curricular Directives to reform secondary education in Brazil. In the first part, the political and economic scene is evaluated as the context that generated the latest round of reforms in the educational field in the 1990s. It questions the option for a model of structural reform (in the Brazilian case, largely restricted to the Program for the Reform of Professional Education - PROEP) and of the curriculum, whose themes find their justification in the contemporary economic, social, cultural, and political context. It discusses the use of a model that is based on experiences developed in other countries and takes the international orientation of the multilateral organizations as its theoretical-methodological reference, leaving out the peculiarities and injunctions of the Brazilian political-administrative system. Such a policy can increase the tension and distance that normally exist between government programs and the possibility of their real implementation in the school network. In the second part, it discusses Resolution no. 3 of the National Education Council's Chamber of Basic Education, of 26 June 1998, which instituted the National Curricular Directives for secondary education, as well as the Legal Bases - Part I - of the National Curricular Parameters for secondary education. The analysis of the official discourse takes Bardin's (1977, p. 209) proposals for models of structural analysis as its methodological reference, seeking to make explicit the implicit values and connotations of the legal texts.
Abstract:
Building on recent proposals by several authors, this work seeks to integrate perspectives on learning that have been conceived as mutually exclusive. This reflection is justified by the importance of not introducing a phylogenetic discontinuity into a process conceived as adaptive, but which is also cultural. Accordingly, proposals concerning the coevolution of the human mind and culture that would support such a perspective are examined, and an integrated view of learning is proposed, as a set of processes organized along an implicit-explicit continuum.
Abstract:
It has been suggested that the temporal control of rhythmic unimanual movements differs between tasks requiring continuous movements (e.g., circle drawing) and discontinuous movements (e.g., finger tapping). Specifically, for continuous movements temporal regularities are an emergent property, whereas for tasks that involve discontinuities timing is an explicit part of the action goal. The present experiment further investigated the control of continuous and discontinuous movements by comparing the coordination dynamics and attentional demands of bimanual continuous circle drawing with those of bimanual intermittent circle drawing. The intermittent task required participants to insert a 400 ms pause between each cycle while circling. Using a dual-task methodology, 15 right-handed participants performed the two circle drawing tasks while vocally responding to randomly presented auditory probes. The circle drawing tasks were performed in symmetrical and asymmetrical coordination modes and at movement frequencies of 1 Hz and 1.7 Hz. Intermittent circle drawing exhibited greater spatial and temporal accuracy and stability than continuous circle drawing, supporting the hypothesis that the two tasks have different underlying control processes. In terms of attentional cost, probe reaction time (RT) was significantly slower during the intermittent circle drawing task than during the continuous circle drawing task, across both coordination modes and movement frequencies. Of interest was the finding that, in the intermittent circling task, RT to probes presented during the pause between cycles did not differ from RT to probes occurring during the circling movement. The differences in attentional demands between the intermittent and continuous circle drawing tasks may reflect the operation of explicit event timing and implicit emergent timing processes, respectively.
Abstract:
We present a fast method for finding optimal parameters for a low-resolution (threading) force field intended to distinguish correct from incorrect folds for a given protein sequence. In contrast to other methods, the parameterization uses information from more than 10^7 misfolded structures as well as a set of native sequence-structure pairs. In addition to testing the resulting force field's performance on the protein sequence threading problem, we show results that characterize the number of parameters necessary for effective structure recognition.
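To illustrate the kind of parameterization involved, a schematic Python sketch: a low-resolution force field scores a sequence-structure pair as a dot product of parameters with feature counts, and the parameters are adjusted so every native pair scores lower (better) than its misfolded decoys. The perceptron-style update below is an illustrative stand-in, not the paper's actual optimization method:

```python
import numpy as np

def train_weights(native_feats, decoy_feats, epochs=50, lr=0.1):
    """native_feats: (n_pairs, n_features) feature counts of native pairs;
    decoy_feats: list of (n_decoys_i, n_features) arrays, one per native pair.
    Energy of a pair is w @ f; natives should have the lowest energy."""
    w = np.zeros(native_feats.shape[1])
    for _ in range(epochs):
        for f_nat, decoys in zip(native_feats, decoy_feats):
            # find the decoy that currently scores best (lowest energy)
            scores = decoys @ w
            best_decoy = decoys[np.argmin(scores)]
            # if the native does not beat every decoy, nudge w to lower the
            # native's energy and raise the offending decoy's energy
            if f_nat @ w >= best_decoy @ w:
                w += lr * (best_decoy - f_nat)
    return w
```

With tens of millions of decoys, practical implementations would subsample or mine only the currently violated decoys, but the constraint being enforced is the same.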
Abstract:
Subcycling algorithms, which employ multiple timesteps, have previously been proposed for explicit direct integration of first- and second-order systems of equations arising in finite element analysis, as well as for integration using explicit/implicit partitions of a model. The author has recently extended this work to implicit/implicit multi-timestep partitions of both first- and second-order systems. In this paper, improved algorithms for multi-timestep implicit integration are introduced that overcome some weaknesses of those proposed previously. In particular, in the second-order case, improved stability is obtained. Some of the energy conservation properties of the Newmark family of algorithms are shown to be preserved in the new multi-timestep extensions of the Newmark method. In the first-order case, the generalized trapezoidal rule is extended to multiple timesteps in a simple way that permits an implicit/implicit partition. Explicit special cases of the present algorithms exist; these are compared to algorithms proposed previously.
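To illustrate the basic subcycling idea, a toy Python sketch for a first-order system u' = Au: one partition takes m small substeps per large step while the other takes a single large step, with the slow partition's state linearly interpolated during the substeps. It uses explicit Euler for brevity, not the paper's implicit Newmark/trapezoidal algorithms:

```python
import numpy as np

def subcycle_step(A, u, dt, part1, part2, m):
    """Advance u by one large step dt. part1 (fast, subcycled) and part2
    (slow, single large step) are index arrays partitioning the DOFs."""
    u = u.copy()
    y0 = u[part2].copy()
    # one large explicit Euler step for the slow partition
    u2_new = y0 + dt * (A[np.ix_(part2, part1)] @ u[part1]
                        + A[np.ix_(part2, part2)] @ y0)
    # m small substeps for the fast partition, interpolating the slow one
    h = dt / m
    x = u[part1].copy()
    for k in range(m):
        y_interp = y0 + (k / m) * (u2_new - y0)
        x = x + h * (A[np.ix_(part1, part1)] @ x
                     + A[np.ix_(part1, part2)] @ y_interp)
    u[part1], u[part2] = x, u2_new
    return u

# Usage: a stiff/non-stiff pair; the fast variable gets 10 substeps per step.
A = np.array([[-50.0, 1.0], [1.0, -1.0]])
u = np.array([1.0, 1.0])
for _ in range(100):
    u = subcycle_step(A, u, dt=0.02, part1=np.array([0]), part2=np.array([1]), m=10)
```

The payoff is that the small timestep is paid only on the stiff partition; the implicit/implicit variants studied in the paper pursue the same economy with better stability properties.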
Abstract:
A new method of estimating the economic value of life is proposed. Using cross-country data, an equation is estimated to explain life expectancy as a function of real consumption of goods and services. The associated cost function for life expectancy in terms of the prices of specific goods and services is used to estimate the cost of a reduction in age-specific mortality rates sufficient to save the life of one person. The cost of saving a life in OECD countries is as much as 1000 times that in the poorest countries. Ethical implications are discussed.
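The estimation chain can be sketched with entirely hypothetical numbers in Python; the fitted functional form and coefficients below are placeholders, not the paper's estimates:

```python
import numpy as np

# Suppose a fitted cross-country relation LE = a + b * ln(C) between life
# expectancy LE (years) and real consumption C (dollars/year).
a, b = 10.0, 8.0          # hypothetical fitted coefficients
C = 2_000.0               # hypothetical consumption level of one country

life_expectancy = a + b * np.log(C)
# dLE/dC = b / C, so the implied marginal cost of one additional year of
# life expectancy per person is dC/dLE = C / b:
marginal_cost_per_year = C / b
print(life_expectancy, marginal_cost_per_year)
```

Note that under this form the marginal cost scales with C itself, which is consistent with the paper's finding that saving a life costs vastly more in rich countries than in poor ones.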
Abstract:
Methods employing continuum approximation to describe the deformation of layered materials possess a clear advantage over explicit models. However, conventional implicit models based on the theory of anisotropic continua suffer from certain difficulties associated with interface slip and internal instabilities. These difficulties can be remedied by considering the bending stiffness of the layers, which implies the introduction of moment (couple) stresses and internal rotations and leads to a Cosserat-type theory. In the present model, the behaviour of the layered material is assumed to be linearly elastic, and the interfaces are assumed to be elastic perfectly plastic. Conditions of slip or no slip at the interfaces are detected by a Coulomb criterion with tension cut-off at zero normal stress. The theory is valid for large-deformation analysis. The model is incorporated into the finite element program AFENA and validated against analytical solutions of elementary buckling problems in a layered medium. The main application of the theory is a problem associated with buckling of the roof and floor of a rectangular excavation in a jointed rock mass under high horizontal in situ stresses.
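The interface check described here can be sketched as follows in Python; the sign convention (compression negative) and parameter names are assumptions for illustration:

```python
def interface_state(sigma_n, tau, cohesion, friction_coeff):
    """Classify an interface point as 'open', 'slip', or 'stick' using a
    Coulomb criterion with tension cut-off at zero normal stress.
    sigma_n: normal stress (negative in compression); tau: shear stress."""
    if sigma_n > 0.0:
        return "open"   # tension cut-off: the interface carries no tension
    # shear capacity grows with compressive normal stress (sigma_n <= 0)
    shear_capacity = cohesion - friction_coeff * sigma_n
    return "slip" if abs(tau) > shear_capacity else "stick"

# e.g. interface_state(-1.0e6, 0.4e6, cohesion=0.0, friction_coeff=0.5) -> 'stick'
```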
Abstract:
Recent research has begun to provide support for the assumptions that memories are stored as a composite and are accessed in parallel (Tehan & Humphreys, 1998). New predictions derived from these assumptions, and from the Chappell and Humphreys (1994) implementation of these assumptions, were tested. In three experiments, subjects studied relatively short lists of words. Some of the lists contained two similar targets (thief and theft) or two dissimilar targets (thief and steal) associated with the same cue (ROBBERY). As predicted, target similarity affected performance in cued recall but not free association. Contrary to predictions, two spaced presentations of a target did not improve performance in free association. Two additional experiments confirmed and extended this finding. Several alternative explanations for the target similarity effect, which incorporate assumptions about separate representations and sequential search, are rejected. The importance of the finding that, in at least one implicit memory paradigm, repetition does not improve performance is also discussed.
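To illustrate composite storage and parallel access, a small Python sketch in the spirit of matrix memory models: cue-target pairs are superimposed into a single matrix of outer products, and a cue retrieves a blend of everything stored with it in one operation. The random vectors are placeholders, not the Chappell and Humphreys (1994) implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

def unit_vector():
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

cue = unit_vector()   # e.g. ROBBERY
t1 = unit_vector()    # e.g. thief
t2 = unit_vector()    # e.g. theft

# store both pairs in one composite trace (a single matrix, not separate slots)
M = np.outer(t1, cue) + np.outer(t2, cue)

# cuing with ROBBERY retrieves a composite of both targets at once, in parallel
retrieved = M @ cue
print(retrieved @ t1, retrieved @ t2)   # both targets are partially matched
```

The blend returned by the cue is the formal sense in which similar targets stored against the same cue can interfere in cued recall.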
Abstract:
1. Although population viability analysis (PVA) is widely employed, forecasts from PVA models are rarely tested. This study in a fragmented forest in southern Australia contrasted field data on patch occupancy and abundance for the arboreal marsupial greater glider Petauroides volans with predictions from a generic spatially explicit PVA model. This work represents one of the first landscape-scale tests of its type. 2. Initially we contrasted field data from a set of eucalypt forest patches totalling 437 ha with a naive null model in which forecasts of patch occupancy were made, assuming no fragmentation effects and based simply on remnant area and measured densities derived from nearby unfragmented forest. The naive null model predicted an average total of approximately 170 greater gliders, considerably greater than the true count (n = 81). 3. Congruence was examined between field data and predictions from PVA under several metapopulation modelling scenarios. The metapopulation models performed better than the naive null model. Logistic regression showed highly significant positive relationships between predicted and actual patch occupancy for the four scenarios (P = 0.001-0.006). When the model-derived probability of patch occupancy was high (0.50-0.75, 0.75-1.00), there was greater congruence between actual patch occupancy and the predicted probability of occupancy. 4. For many patches, probability distribution functions indicated that model predictions for animal abundance in a given patch were not outside those expected by chance. However, for some patches the model either substantially over-predicted or under-predicted actual abundance. Some important processes, such as inter-patch dispersal, that influence the distribution and abundance of the greater glider may not have been adequately modelled. 5. Additional landscape-scale tests of PVA models, on a wider range of species, are required to assess further predictions made using these tools. This will help determine those taxa for which predictions are and are not accurate and give insights for improving models for applied conservation management.
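The congruence test in point 3 can be sketched as a logistic regression of observed occupancy on model-predicted probability, in Python with statsmodels; the data arrays are placeholders to be filled with field observations and PVA output, not the study's dataset:

```python
import numpy as np
import statsmodels.api as sm

def test_congruence(predicted_prob, observed_occupancy):
    """predicted_prob: (n_patches,) PVA-predicted occupancy probabilities;
    observed_occupancy: (n_patches,) field presence/absence coded 1/0.
    Fits occupancy ~ predicted probability and returns coefficients and
    p-values; a positive, significant slope indicates congruence."""
    X = sm.add_constant(np.asarray(predicted_prob))
    model = sm.Logit(np.asarray(observed_occupancy), X).fit(disp=0)
    return model.params, model.pvalues
```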