971 results for explicit läsundervisning
Abstract:
In this paper, the temperature of a pilot-scale batch reaction system is modeled towards the design of a controller based on the explicit model predictive control (EMPC) strategy. Several mathematical models are developed from experimental data to describe the system behavior. The simplest reliable model obtained is a (1,1,1)-order ARX polynomial model, for which the EMPC controller has been designed. The resulting controller has reduced mathematical complexity and, given the successful simulation results, will be applied directly to the real control system in the next stage of the experimental framework.
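The abstract names a (1,1,1)-order ARX model. As a minimal sketch (synthetic data standing in for the pilot-plant measurements; the coefficients are invented), such a model, y[t] = -a1*y[t-1] + b1*u[t-1] + e[t], can be fitted by least squares and used for the one-step-ahead predictions an MPC controller needs:

```python
import numpy as np

def fit_arx_111(y, u):
    """Least-squares fit of an ARX(1,1,1) model:
    y[t] = -a1*y[t-1] + b1*u[t-1] + e[t]."""
    # Regressors: previous output and previous (unit-delayed) input.
    Phi = np.column_stack([-y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    a1, b1 = theta
    return a1, b1

def predict_arx_111(a1, b1, y_prev, u_prev):
    """One-step-ahead prediction of the output (here, temperature)."""
    return -a1 * y_prev + b1 * u_prev

# Synthetic data: output y driven by input u with invented dynamics.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.standard_normal()

a1, b1 = fit_arx_111(y, u)
print(a1, b1)  # recovers a1 close to -0.9 and b1 close to 0.5
```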
Abstract:
Abstract Scheduling problems are generally NP-hard combinatorial problems, and much research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are applied to a mapping of the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning can thus amount to 'counting' in the case of multinomial distributions.
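To illustrate the closing remark that learning can amount to counting, the following minimal sketch (hypothetical rule strings; a simple chain-structured network is assumed purely for brevity, whereas in practice the structure is given by the problem) estimates multinomial conditional probabilities from a set of promising solutions:

```python
from collections import Counter, defaultdict

def learn_chain_cpts(rule_strings, n_rules, alpha=1.0):
    """Estimate P(rule at step t | rule at step t-1) by counting
    transitions in promising solutions, with Laplace smoothing alpha."""
    pair_counts = defaultdict(Counter)
    for s in rule_strings:
        for prev, cur in zip(s, s[1:]):
            pair_counts[prev][cur] += 1
    cpt = {}
    for prev in range(n_rules):
        total = sum(pair_counts[prev].values()) + alpha * n_rules
        cpt[prev] = [(pair_counts[prev][r] + alpha) / total
                     for r in range(n_rules)]
    return cpt

# Hypothetical promising solutions: each string lists the rule
# applied at each construction step.
promising = [[0, 1, 1, 2], [0, 1, 2, 2], [3, 1, 1, 2]]
cpt = learn_chain_cpts(promising, n_rules=4)
print(cpt[1])  # distribution over the rule that follows rule 1
```

New rule strings would then be sampled node by node from these conditional probabilities, as the abstract describes.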
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the Roulette Wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation (a minimal sketch of these operations follows the references below). It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantú-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1): 1-18.
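Relatedly, a minimal sketch (illustrative constants, not the authors' implementation) of the roulette-wheel selection and strength reinforcement steps of the LCS approach described above:

```python
import random

def roulette_select(strengths):
    """Pick a rule index with probability proportional to its strength."""
    total = sum(strengths)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, s in enumerate(strengths):
        acc += s
        if acc >= r:
            return i
    return len(strengths) - 1

def reinforce(strengths, used_rules, reward=0.1):
    """Strengthen rules used in the previous solution; unused
    rules keep their strength unchanged."""
    for i in used_rules:
        strengths[i] += reward
    return strengths

# All rules start with the same constant initial strength.
strengths = [1.0] * 5
used = {roulette_select(strengths) for _ in range(3)}
strengths = reinforce(strengths, used)
print(strengths)
```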
Abstract:
Vol. I. Introduction to Athyrium. -- Vol. II. Blechnum to Nothochlaena. -- Vol. III. Ochropteris to Woodwardia, and Selaginella.
Abstract:
Most second language researchers agree that there is a role for corrective feedback in second language writing classes. However, many unanswered questions remain concerning which linguistic features to target and the type and amount of feedback to offer. This study examined two new pieces of writing by 151 learners of English as a Second Language (ESL), in order to investigate the effect of direct and metalinguistic written feedback on errors with the simple past tense, the present perfect tense, dropped pronouns, and pronominal duplication. This inquiry also considered the extent to which learner differences in language-analytic ability (LAA), as measured by the LLAMA F, mediated the effects of these two types of explicit written corrective feedback. Learners in the feedback groups were provided with corrective feedback on two essays, after which learners in all three groups completed two additional writing tasks to determine whether or not the provision of corrective feedback led to greater gains in accuracy compared to no feedback. Both treatment groups, direct and metalinguistic, performed better than the comparison group on new pieces of writing immediately following the treatment sessions, yet direct feedback was more durable than metalinguistic feedback for one structure, the simple past tense. Participants with greater LAA proved more likely to achieve gains in the direct feedback group than in the metalinguistic group, whereas learners with lower LAA benefited more from metalinguistic feedback. Overall, the findings of the present study confirm the results of prior studies that have found a positive role for written corrective feedback in instructed second language acquisition.
Abstract:
Physical details: Limp vellum binding, in fair condition (wrinkled, stained). Visible maculature. Endpapers made from printer's waste. Text without headline, well printed in 38 lines on quality paper in Gothic type of two sizes. Signatures. Colophon. Visible damp stains. Holdings according to the notes on the cards of the old catalogue. Incunable. Title and publication data taken from the colophon. Includes an index. Runs from "abiuratio" (= abjuration) to "zizania" (= tares). It is an alphabetically ordered manual on how to proceed against heretics and apostates in various cases, in matters of both civil and ecclesiastical law.
Abstract:
Diffusion equations that use time fractional derivatives are attractive because they describe a wealth of problems involving non-Markovian random walks. The time fractional diffusion equation (TFDE) is obtained from the standard diffusion equation by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1). Developing numerical methods for solving fractional partial differential equations is a new research field, and the theoretical analysis of the numerical methods associated with them is not yet fully developed. In this paper an explicit conservative difference approximation (ECDA) for the TFDE is proposed. We give a detailed analysis of this ECDA and generate discrete models of random walk suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation. The stability and convergence of the ECDA for the TFDE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
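For illustration, a generic explicit scheme of this type can be built from the L1 discretization of the Caputo derivative. The sketch below is a common textbook construction under assumed initial and boundary conditions, not necessarily the paper's ECDA:

```python
import numpy as np
from math import gamma

def tfde_explicit(alpha=0.8, K=1.0, L=1.0, T=0.5, nx=10, nt=1000):
    """Explicit L1-type scheme for D_t^alpha u = K * u_xx on (0, L)
    with homogeneous Dirichlet boundaries and u(x, 0) = sin(pi x)."""
    h, tau = L / nx, T / nt
    # mu must be small enough for stability (on the order of 1/2,
    # as in the classical limit alpha -> 1); here mu is about 0.21.
    mu = gamma(2 - alpha) * tau**alpha * K / h**2
    x = np.linspace(0.0, L, nx + 1)
    # L1 weights b_k = (k+1)^(1-alpha) - k^(1-alpha).
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(nt)]
    U = [np.sin(np.pi * x)]
    for n in range(nt):
        u = U[n]
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        # Memory term of the fractional derivative (the non-Markovian part).
        hist = sum(b[k] * (U[n + 1 - k] - U[n - k]) for k in range(1, n + 1))
        new = u - hist + mu * lap
        new[0] = new[-1] = 0.0
        U.append(new)
    return x, U[-1]

x, u_final = tfde_explicit()
print(u_final.max())  # the initial peak of 1.0 has decayed
```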
Abstract:
The ability of agents and services to automatically locate and interact with unknown partners is a goal for both the semantic web and web services. This "serendipitous interoperability" is hindered by the lack of an explicit means of describing what services (or agents) are able to do, that is, their capabilities. At present, informal descriptions of what services can do are found in "documentation" elements, or they are somehow encoded in operation names and signatures. We show, by reference to existing service examples, how ambiguous and imprecise capability descriptions hamper the attainment of automated interoperability goals in the open, global web environment. In this paper we propose a structured, machine-readable description of capabilities, which may help to increase the recall and precision of service discovery mechanisms. Our capability description draws on previous work in capability and process modeling and allows the incorporation of external classification schemes. The capability description is presented as a conceptual meta model. The model supports conceptual queries and can be used as an extension to the DAML-S Service Profile.
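By way of illustration only (the paper presents the capability description as a conceptual meta model; all names and fields below are hypothetical), such a description might anchor an action and its object in external classification schemes:

```python
from dataclasses import dataclass, field

@dataclass
class ClassifiedTerm:
    """A term anchored in an external classification scheme."""
    label: str   # human-readable name
    scheme: str  # identifier of the external classification scheme
    code: str    # the term's code within that scheme

@dataclass
class Capability:
    """Structured, machine-readable statement of what a service can do."""
    action: ClassifiedTerm                  # the verb, e.g. 'reserve'
    object: ClassifiedTerm                  # what is acted upon
    qualifiers: list[ClassifiedTerm] = field(default_factory=list)

# Hypothetical capability for a hotel booking service.
book_room = Capability(
    action=ClassifiedTerm("reserve", "example-verb-scheme", "V-101"),
    object=ClassifiedTerm("hotel room", "example-product-scheme", "P-442"),
)
```

A discovery mechanism could then match on scheme codes rather than parsing operation names or free-text documentation.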
Abstract:
The next phase envisioned for the World Wide Web is automated ad-hoc interaction between intelligent agents, web services, databases and semantic web enabled applications. Although at present this appears to be a distant objective, there are practical steps that can be taken to advance the vision. We propose an extension to classical conceptual models to allow the definition of application components in terms of public standards and explicit semantics, thus building into web-based applications the foundation for shared understanding and interoperability. The use of external definitions and the need to store outsourced type information internally bring to light the issue of object identity in a global environment, where object instances may be identified by multiple externally controlled identification schemes. We illustrate how traditional conceptual models may be augmented to recognise and deal with multiple identities.
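As a toy illustration (not the paper's model; the schemes shown are invented), an object can carry identifiers from several externally controlled identification schemes, with two instances treated as denoting the same object when they agree under at least one shared scheme:

```python
from dataclasses import dataclass, field

@dataclass
class Identified:
    """An object whose identity is a set of (scheme, identifier)
    pairs controlled by external authorities."""
    ids: set[tuple[str, str]] = field(default_factory=set)

    def same_as(self, other: "Identified") -> bool:
        # Same real-world object if any (scheme, identifier) pair is shared.
        return bool(self.ids & other.ids)

a = Identified({("ISBN", "978-3-16-148410-0"), ("LocalDB", "42")})
b = Identified({("ISBN", "978-3-16-148410-0")})
print(a.same_as(b))  # True
```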
Abstract:
The following paper considers the question: where to for office property? In doing so, it focuses, in the first instance, on identifying and describing a selection of key forces for change present within the contemporary operating environment in which office property functions. Given the increasingly complex, dynamic and multi-faceted character of this environment, the paper seeks to identify only the primary forces for change within the context of the future of office property. These core drivers of change have, for the purposes of this discussion, been characterised as including a range of economic, demographic and socio-cultural factors, together with developments in information and communication technology. Having established this foundation, the paper proceeds to consider the manner in which these forces may, in the future, be manifested within the office property market. Comment is offered regarding the potential future implications of these forces for change, together with their likely influence on the nature and management of the physical asset itself. Whilst no explicit time horizon has been envisioned in the preparation of this paper, particular attention has been accorded to short to medium term trends, that is, those likely to emerge in the office property marketplace over the coming two decades. Further, the paper considers the question posed, in respect of the future of office property, in the context of developed western nations. The degree of commonality seen in these mature markets is such that generalisations may more appropriately and robustly be applied. Whilst some of the comments offered with respect to the target market may find application in other arenas, it is beyond the scope of this paper to explicitly consider highly heterogeneous markets. Given also the wide scope of this paper, key drivers for change and their likely implications for the commercial office property market are identified at a global level (within the above established parameters). Accordingly, the focus is necessarily such that it serves to reflect overarching directions at a universal level (with the effect being that direct applicability to individual markets, when viewed in isolation on a geographic or property-type-specific basis, may not be fitting in all instances).
Abstract:
Bomb attacks carried out by terrorists, targeting high-occupancy buildings, have become increasingly common in recent times. Large numbers of casualties and extensive property damage result from the overpressure of the blast, followed by the failure of structural elements. Understanding the blast response of multi-storey buildings and evaluating their remaining life have therefore become important. The response and damage of single structural components, such as columns or slabs, under explosive loads have been examined in the literature, but studies on the blast response and damage analysis of structural frames in multi-storey buildings are limited, and such studies are necessary for assessing the vulnerability of these buildings. This paper investigates the blast response and damage evaluation of reinforced concrete (RC) frames, designed for normal gravity loads, in order to evaluate their remaining life. Numerical modelling and analysis were carried out using the explicit finite element software LS-DYNA. The modelling and analysis take into consideration reinforcement details together with material performance at high strain rates. Damage indices for columns are calculated from their residual and original capacities. The numerical results generated in this study can be used to identify relationships between the blast load parameters and the column damage. The damage index curves will provide a simple means for assessing the damage to a typical multi-storey RC building frame under an external bomb blast scenario.
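The abstract does not state the damage index formula, but an index based on residual and original capacities is commonly defined as below (a sketch; the paper's exact definition may differ):

```python
def damage_index(residual_capacity, original_capacity):
    """Column damage index D = 1 - P_residual / P_original.
    D = 0 means undamaged; D = 1 means total loss of load capacity."""
    return 1.0 - residual_capacity / original_capacity

# Hypothetical capacities in kN: a column retaining 60% of its
# original axial capacity after the blast has D = 0.4.
print(damage_index(residual_capacity=1800.0, original_capacity=3000.0))
```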
Abstract:
Knowledge has been recognised as an important organisational asset that increases in value when shared, unlike other organisational assets, which decrease in value through their exploitation. Effective knowledge transfer in organisations helps to achieve and maintain competitive advantage and, ultimately, organisational success. So far, research on knowledge transfer has focused on traditional (functional) organisations; only recently has attention been directed towards knowledge transfer in projects. Existing research on project learning has recognised the need for knowledge transfer within and across projects in project-based organisations (PBOs). Most projects can provide valuable new knowledge from unexpected actions, approaches or problems experienced during the project phases. The aim of this paper is to demonstrate the impact of unique project characteristics on knowledge transfer in PBOs. This is accomplished through a review of the literature and a series of interviews with senior project practitioners; the interviews complement the findings from the literature. Knowledge transfer in projects occurs through social communication and the transfer of lessons learned, where project management offices (PMOs) and project managers play significant roles in enhancing knowledge transfer and communication within the PBO and across projects. They act as connectors between projects and the PBO 'hub'. Moreover, some project management processes naturally facilitate knowledge transfer across projects. On the other hand, PBOs face communication challenges due to the unique and temporary characteristics of projects. The distance between projects and the lack or weakness of formal links across projects create communication problems that impede knowledge transfer across projects. The main contribution of this paper is to demonstrate that both social communication and explicit informational channels play an important role in inter-project knowledge transfer. The interviews also revealed the important role organisational culture plays in knowledge transfer in PBOs.
Abstract:
In this study, Lampert examines how cultural identities are constructed within fictional texts for young people written about the attacks on the Twin Towers. It identifies three significant identity categories encoded in 9/11 books for children: ethnic identities, national identities, and heroic identities, arguing that the identities formed within the selected children's texts are in flux, privileging performances of identities that are contingent on post-9/11 politics. Looking at texts including picture books, young adult fiction, and a selection of DC Comics, Lampert finds in post-9/11 children's literature a co-mingling of xenophobia and tolerance; a binaried competition between good and evil, and between global harmony and national insularity; and a lauding of both the commonplace hero and the super-human. The shifting identities evident in texts being produced for children about 9/11 offer implicit and explicit accounts of what constitutes good citizenship, loyalty to nation and community, and desirable attributes in a Western post-9/11 context. This book makes an original contribution to the field of children's literature by providing a focused and sustained analysis of how texts for children about 9/11 contribute to formations of identity in these complex times of cultural unease and global unrest.
Abstract:
Background: The problem of silent multiple comparisons is one of the most difficult statistical problems faced by scientists. It is a particular problem when investigating a one-off cancer cluster reported to a health department, because any one of hundreds, or possibly thousands, of neighbourhoods, schools, or workplaces could have reported a cluster, which could have been for any one of several types of cancer or any one of several time periods.
Methods: This paper contrasts the frequentist approach with a Bayesian approach for dealing with silent multiple comparisons in the context of a one-off cluster reported to a health department. Two published cluster investigations were re-analysed using the Dunn-Šidák method to adjust frequentist p-values and confidence intervals for silent multiple comparisons. Bayesian methods were based on the Gamma distribution.
Results: Bayesian analysis with non-informative priors produced results similar to the frequentist analysis, and suggested that both clusters represented a statistical excess. In the frequentist framework, the statistical significance of both clusters was extremely sensitive to the number of silent multiple comparisons, which can only ever be a subjective "guesstimate". The Bayesian approach is also subjective: whether there is an apparent statistical excess depends on the specified prior.
Conclusion: In cluster investigations, the frequentist approach is just as subjective as the Bayesian approach, but the Bayesian approach is less ambitious in that it treats the analysis as a synthesis of data and personal judgements (possibly poor ones), rather than objective reality. Bayesian analysis is (arguably) a useful tool to support complicated decision-making, because it makes the uncertainty associated with silent multiple comparisons explicit.
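For reference, the Dunn-Šidák adjustment used in the frequentist re-analysis has a simple closed form; in this sketch, k, the number of silent comparisons, is the subjective guesstimate the abstract refers to:

```python
def dunn_sidak(p, k):
    """Adjust a p-value for k independent (silent) comparisons:
    the probability of at least one result this extreme among k tests."""
    return 1.0 - (1.0 - p) ** k

# A nominally striking cluster becomes unremarkable once hundreds of
# silent comparisons are acknowledged.
for k in (1, 100, 1000):
    print(k, round(dunn_sidak(0.001, k), 3))
# prints: 1 0.001, 100 0.095, 1000 0.632
```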