918 results for Dynamic Learning Capabilities
Abstract:
Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three tier top-down approach where cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
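The regularization idea can be illustrated with a minimal sketch: solving for a weight matrix from prescribed state trajectories via Tikhonov-regularized least squares, whose large-regularization limit is a scaled Hebbian outer product. This is an assumed toy formulation for a finite-dimensional network, not the paper's construction for the continuous Amari kernel.

```python
import numpy as np

def tikhonov_hebbian(X, Y, lam=1e-2):
    """Solve W X ~= Y for a weight matrix W with Tikhonov regularization:
    W = Y X^T (X X^T + lam*I)^{-1}.
    As lam grows, W approaches a scaled Hebbian outer product Y X^T."""
    n = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(n))

# toy example: recover a linear mapping from noisy state snapshots
rng = np.random.default_rng(0)
W_true = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 50))          # states
Y = W_true @ X + 0.01 * rng.standard_normal((3, 50))  # noisy targets
W_hat = tikhonov_hebbian(X, Y, lam=1e-3)
```

The regularizer makes the inversion stable even when `X X^T` is nearly singular, which is the practical point of using Tikhonov regularization on an ill-posed kernel construction problem.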
Abstract:
A neural network enhanced proportional, integral and derivative (PID) controller is presented that combines the attributes of neural network learning with a generalized minimum-variance self-tuning control (STC) strategy. The neuro PID controller is structured with plant model identification and PID parameter tuning. The plants to be controlled are approximated by an equivalent model composed of a simple linear submodel to approximate plant dynamics around operating points, plus an error agent to accommodate the errors induced by linear submodel inaccuracy due to non-linearities and other complexities. A generalized recursive least-squares algorithm is used to identify the linear submodel, and a layered neural network is used to detect the error agent, in which the weights are updated on the basis of the error between the plant output and the output from the linear submodel. The procedure for controller design is based on the equivalent model, and therefore the error agent functions naturally within the control law. In this way the controller can deal not only with a wide range of linear dynamic plants but also with complex plants characterized by severe non-linearity, uncertainties and non-minimum phase behaviours. Two simulation studies are provided to demonstrate the effectiveness of the controller design procedure.
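The online identification step can be sketched with a standard recursive least-squares (RLS) update with a forgetting factor; this is a generic textbook RLS, assumed here for illustration, not the paper's generalized variant.

```python
import numpy as np

class RLS:
    """Recursive least squares with a forgetting factor -- a common way
    to track the parameters of a linear submodel online."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.theta = np.zeros(n)      # parameter estimate
        self.P = delta * np.eye(n)    # inverse-covariance-like matrix
        self.lam = lam                # forgetting factor

    def update(self, phi, y):
        # phi: regressor vector, y: measured output
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam

# identify y[t+1] = a*y[t] + b*u[t] with true a = 0.8, b = 0.5
rls = RLS(2)
rng = np.random.default_rng(1)
y, u = 0.0, 1.0
for _ in range(200):
    phi = np.array([y, u])
    y_next = 0.8 * y + 0.5 * u
    rls.update(phi, y_next)
    y, u = y_next, rng.standard_normal()
```

The forgetting factor below 1 down-weights old data, which is what lets the estimate follow a plant whose operating point drifts.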
Abstract:
One of the most pervading concepts underlying computational models of information processing in the brain is linear input integration of rate-coded uni-variate information by neurons. After a suitable learning process this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic of cognitive processes, a more complex, multi-variate dynamic neural coding mechanism is required - knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking neuron connectionist system.
Abstract:
Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling, including support for model semantics, dynamic states and behaviour, temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from the literature the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
Abstract:
Business and IT alignment has continued as a top concern for business and IT executives for almost three decades. Many researchers have conducted empirical studies on the relationship between business-IT alignment and performance. Yet these approaches, lacking a social perspective, have had little impact on sustaining performance and competitive advantage. Moreover, only a limited body of alignment literature explores organisational learning as represented in shared understanding, communication, cognitive maps and experiences. Hence, this paper proposes an integrated process that enables the social and intellectual dimensions through the concept of organisational learning. In particular, the feedback and feed-forward processes provide value creation across dynamic, multiple levels of learning. This mechanism enables on-going effectiveness through the development of individuals, groups and organisations, which improves the quality of business and IT strategies and drives performance.
Abstract:
Business and IT alignment is increasingly acknowledged as a key to organisational performance. However, alignment research lacks mechanisms that enable an on-going process with multi-level effects. Multi-level learning allows on-going effectiveness through development of the organisation and improved quality of business and IT strategies. In particular, exploration and exploitation enable an effective process of alignment across dynamic, multiple levels of learning. Hence, this paper proposes a conceptual framework that links multi-level learning and business-IT strategy through the concepts of exploration and exploitation, considering short-term and long-term alignment together to address the challenges of strategic alignment faced in sustaining organisational performance.
Abstract:
Advances in hardware and software in the past decade make it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged from these advances in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data of continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time, as soon as it is captured, for example when the data stream is infinite, fast changing, or simply too large in size to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drifts. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new and removing old rules. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
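The abstain-rather-than-guess behaviour and the add/remove rule cycle can be sketched with a deliberately simplified rule learner. This toy uses exact feature-value conjunctions as rules and a crude accuracy threshold for retirement; it is an illustration of the idea, not the published eRules algorithm.

```python
from collections import defaultdict

class SimpleRuleStreamClassifier:
    """Toy rule-based stream classifier: rules map a feature tuple to a
    label; unmatched instances are left unclassified (None); rules whose
    recent accuracy drops below a threshold are removed (drift)."""
    def __init__(self, min_acc=0.6):
        self.rules = {}                            # antecedent -> label
        self.stats = defaultdict(lambda: [0, 0])   # [correct, total]
        self.min_acc = min_acc

    def predict(self, x):
        return self.rules.get(tuple(x))            # None means abstain

    def learn(self, x, y):
        key = tuple(x)
        pred = self.rules.get(key)
        if pred is not None:
            s = self.stats[key]
            s[0] += (pred == y)
            s[1] += 1
            if s[1] >= 5 and s[0] / s[1] < self.min_acc:
                del self.rules[key]                # retire stale rule
                del self.stats[key]
        if key not in self.rules:
            self.rules[key] = y                    # induce a new rule

clf = SimpleRuleStreamClassifier()
clf.learn(("sunny", "hot"), "no")
matched = clf.predict(("sunny", "hot"))    # covered by an induced rule
abstained = clf.predict(("rainy", "mild")) # no matching rule: None
```

Returning `None` for uncovered instances is the key design choice the abstract highlights: a rule-based learner can decline to classify, where a decision tree always forces some leaf.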
Abstract:
Despite the wealth of valuable information that has been generated by motivation studies to date, there are certain limitations in the common approaches. Quantitative and psychometric approaches to motivation research that have dominated in recent decades provided epiphenomenal descriptions of learner motivation within different contexts. However, these approaches assume homogeneity within a given group and often mask the variation between learners within the same, and different, contexts. Although these studies have provided empirical data to form and validate theoretical constructs, they have failed to recognise learners as individual 'people' that interact with their context. Learning context has become increasingly explicit in motivation studies (see Coleman et al. 2007 and Housen et al. 2011); however, it is generally considered as a background variable which is pre-existing and external to the individual. Stemming from the recent 'social turn' (Block 2003) in SLA research from a more cognitive-linguistic perspective to a more context-specific view of language learning, there has been an upsurge in demand for a greater focus on the 'person in context' in motivation research (Ushioda 2011). This paper reports on the findings of a longitudinal study of young English learners of French as they transition from primary to secondary school. Over 12 months, the study employed a mixed-method approach in order to gain an in-depth understanding of how the learners' context influenced attitudes to language learning. The questionnaire results show that whilst the learners displayed some consistent and stable motivational traits over the 12 months, there were significant differences for learners within different contexts in terms of their attitudes to the language classroom and their levels of self-confidence.
A subsequent examination of the qualitative focus group data provided an insight into how and why these attitudes were formed and emphasised the dynamic and complex interplay between learners and their context.
Abstract:
Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palms of the hand to the object. In this chapter we will begin by considering the different types of grasp and then consider the physics of rigid objects that will be needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasp in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it would also apply to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid body dynamics are discussed and applied to the problem of rendering multiple forces to allow the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support computation of multi-body physics.
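The rigid-body bookkeeping behind multi-contact grasping can be sketched in a few lines: several finger contact forces aggregate into a single net force and torque about the centre of mass, which then drives the object's motion. This is a minimal illustration of that aggregation, assumed for exposition, not the chapter's rendering algorithm.

```python
import numpy as np

def net_wrench(contact_points, forces, com):
    """Aggregate multiple contact forces on a rigid object into a net
    force and a net torque taken about the centre of mass `com`."""
    F = np.sum(forces, axis=0)
    tau = np.sum([np.cross(p - com, f)
                  for p, f in zip(contact_points, forces)], axis=0)
    return F, tau

# two opposing grasp forces pinching a unit cube about its centre
pts = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
fs = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
F, tau = net_wrench(pts, fs, com=np.zeros(3))
```

For the opposing pinch above both the net force and the net torque vanish, which is exactly why a stable pinch holds an object still while each finger still feels its own rendered contact force.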
Abstract:
As the built environment accounts for much of the world's emissions, resource consumption and waste, concerns remain as to how sustainable the sector is. Understanding how such concerns can be better managed is complex, with a range of competing agendas and institutional forces at play. This is especially the case in Nigeria, where there are often differing priorities and weak regulations and institutions to deal with this challenge. Construction firms are in competition with each other in a market that is growing in size and sophistication yearly. The business case for sustainability has been argued repeatedly in the literature. However, the capability of construction firms with respect to sustainability in Nigeria has not been studied. This paper presents the preliminary findings of an exploratory multi-case study carried out to understand firms' views on sustainability as a source of competitive advantage. An international firm and a lower-medium-sized indigenous firm were selected for this purpose. Qualitative interviews were conducted with top-level management of both firms, with key themes from the sustainable construction and dynamic capabilities literature informing the case study protocol. The interviews were transcribed and analysed with the use of NVivo software. The findings suggest that the multinational firm is better grounded in sustainability knowledge. Although the level of awareness and demand for sustainable construction is generally very poor, a few international clients are beginning to stimulate interest in sustainable buildings. This has triggered both firms to build their capabilities in that regard, albeit in an unhurried manner. Both firms agree on the potential of market-driven sustainability in the long term. Nonetheless, more drastic actions are required to accelerate the sustainable construction agenda in Nigeria.
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template solution provided by the author of the exercise. Each solution is a geometric construction which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent or not. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
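The "same input, same output" equivalence test can be sketched numerically: run both constructions on random inputs and compare the output objects' distances. This toy checks only point outputs over a finite number of random trials, so it is a sketch of the abstract's idea rather than iGeom's actual algorithm.

```python
import math
import random

def equivalent(f, g, gen_inputs, trials=100, tol=1e-9):
    """Numerically test whether two constructions (functions from input
    points to output points) agree on randomly generated inputs."""
    for _ in range(trials):
        pts = gen_inputs()
        for p, q in zip(f(*pts), g(*pts)):
            if math.dist(p, q) > tol:
                return False
    return True

# two different constructions of the midpoint of segment AB
def midpoint_avg(a, b):
    return [((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)]

def midpoint_param(a, b):
    t = 0.5  # parametric point A + t*(B - A)
    return [(a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))]

rand_pts = lambda: [(random.uniform(-10, 10), random.uniform(-10, 10))
                    for _ in range(2)]
same = equivalent(midpoint_avg, midpoint_param, rand_pts)
```

Random sampling with a small tolerance is a pragmatic stand-in for true functional equivalence: it cannot prove equivalence, but it reliably separates genuinely different constructions.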
Abstract:
Science centres are one of the best opportunities for informal study of natural science. Learning in science centres has many advantages over traditional methods: it can motivate visitors, provide a social experience and improve people's understanding and attitudes, thereby fostering wider interest in natural science. In science centres, pupils show interest, enthusiasm, motivation, self-confidence and sensitivity, and they are more open and eager to learn. Traditional school classes, however, mostly do not foster these capabilities. This research presents a qualitative study in a science centre. Data was gathered from observations and interviews at the Science North science centre in Canada. Pupils' learning behaviours were studied at different exhibits in the science centre. Learning behaviours are classified as follows: reading labels, experimenting with the exhibits, observing others or the exhibit, using a guide, repeating the activity, positive emotional response, acknowledged relevance, and seeking and sharing information. In this research, it became clear that in general pupils do not read labels; in most cases pupils do not use the guide's help; pupils prefer exhibits that enable a high level of interactivity; and pupils display more learning behaviours at exhibits that enable a high level of interactivity.
Abstract:
This dissertation aims to examine empirical evidence in the light of taxonomies and strategies for measuring firms' technological capabilities and innovation in the context of developing countries, motivated by the fact that debates and studies on innovation have intensified over the last thirty years, driven by recognition of its vital and growing importance to the technological, economic, competitive and industrial development of firms and countries. Two main tendencies can be identified in this debate. On one side is the literature oriented to the logic of developed countries, whose companies are mostly positioned at the technological frontier, characterised by command of advanced innovative capabilities directed at sustaining, deepening and renewing them. On the other side are perspectives directed at the reality of developing countries, where companies with scarce resources prevail, still in the process of accumulating basic and intermediate technological capabilities, with characteristics and technological development trajectories distinct from, or even the reverse of, those of developed countries. Within this last tradition of studies, measurement approaches based on S&T (science and technology) indicators and on types and levels of technological capabilities stand out. The first offers a macro-level, aggregated perspective through the analysis of a representative sample of firms, seeking to generate internationally comparable data without addressing the intra-organisational specificities and nuances of the paths of technological accumulation developed by firms; it relies mostly on R&D statistics, patents and individual qualifications, indicators that carry their own limitations. On the other hand, studies that examine types and levels of technological capabilities are scarce, and usually directed at a small sample of firms and/or industrial sectors.
Therefore, in the light of the focus and potential of each perspective, this scenario exposes a lack of studies that examine both types of strategy in a parallel and complementary way, seeking to offer more realistic, consistent and concrete information about the technological reality of developing countries. To close this gap, this dissertation examines (i) strategies of innovation measurement in developing-country contexts based on traditional approaches and S&T indicators, represented by four innovation surveys (ECIB, PINTEC, PAEP and EAI), and (ii), from the perspective of technological capabilities as an intrinsic resource of the firm, whose development occurs cumulatively and is based on learning, presents and extracts generalisations from empirical applications of a metric that identifies types and levels of technological capabilities from a dynamic, intra-firm perspective. Examination of the empirical evidence from the two approaches showed what each metric can offer and how each can contribute to generating information that reflects the technological development of specific industrial sectors in developing countries. Although the focus, objective, perspective, coverage, scope and lens used are substantially distinct, producing on one side an aggregated view and on the other an intra-sectoral, intra-organisational and specific view, the results suggest that using one does not mean discarding the other.
On the contrary, using both in a complementary way generates more complete, rich and relevant evidence and analyses that offer a realistic picture of industrial development and contribute more directly to the design of corporate strategies and government policies, including those directed at macro-level aspects as well as more specific, focused ones designed to strengthen and foster firms' in-house innovative efforts.
Abstract:
We analyze a dynamic principal–agent model where an infinitely-lived principal faces a sequence of finitely-lived agents who differ in their ability to produce output. The ability of an agent is initially unknown to both him and the principal. An agent's effort affects the information on ability that is conveyed by performance. We characterize the equilibrium contracts and show that they display short-term commitment to employment when the impact of effort on output is persistent but delayed. By providing insurance against early termination, commitment encourages agents to exert effort, and thus improves the principal's ability to identify their talent. We argue that this helps explain the use of probationary appointments in environments in which there exists uncertainty about individual ability.