23 results for Learning Networks
Abstract:
Developing high-quality scientific research will be most effective if research communities with diverse skills and interests are able to share information and knowledge, are aware of the major challenges across disciplines, and can exploit economies of scale to provide robust answers and better inform policy. We evaluate opportunities and challenges facing the development of a more interactive research environment by developing an interdisciplinary synthesis of research on a single geographic region. We focus on the Amazon as it is of enormous regional and global environmental importance and faces a highly uncertain future. To take stock of existing knowledge and provide a framework for analysis we present a set of mini-reviews from fourteen different areas of research, encompassing taxonomy, biodiversity, biogeography, vegetation dynamics, landscape ecology, earth-atmosphere interactions, ecosystem processes, fire, deforestation dynamics, hydrology, hunting, conservation planning, livelihoods, and payments for ecosystem services. Each review highlights the current state of knowledge and identifies research priorities, including major challenges and opportunities. We show that while substantial progress is being made across many areas of scientific research, our understanding of specific issues is often dependent on knowledge from other disciplines. Accelerating the acquisition of reliable and contextualized knowledge about the fate of complex pristine and modified ecosystems is partly dependent on our ability to exploit economies of scale in shared resources and technical expertise, recognise and make explicit interconnections and feedbacks among sub-disciplines, increase the temporal and spatial scale of existing studies, and improve the dissemination of scientific findings to policy makers and society at large. 
Enhancing interaction among research efforts is vital if we are to make the most of limited funds and overcome the challenges posed by addressing large-scale interdisciplinary questions. Bringing together a diverse scientific community with a single geographic focus can help increase awareness of research questions both within and among disciplines, and reveal the opportunities that may exist for advancing acquisition of reliable knowledge. This approach could be useful for a variety of globally important scientific questions.
Abstract:
Pattern separation is a new technique in digital learning networks which can be used to detect state conflicts. This letter describes pattern separation in a simple single-layer network, and an application of the technique in networks with feedback.
Abstract:
The problem of adjusting the weights (learning) in multilayer feedforward neural networks (NN) is known to be of high importance when utilizing NN techniques in various practical applications. The learning procedure should be performed as fast as possible and in a computationally simple fashion, two requirements that are usually not satisfied in practice by the methods developed so far. Moreover, the presence of random inaccuracies is usually not taken into account. In view of these three issues, the alternative stochastic approximation approach discussed in this paper seems very promising.
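The stochastic approximation idea behind this abstract can be illustrated with a minimal Robbins-Monro iteration. This is a sketch under assumed choices (decaying step sizes a/t and Gaussian measurement noise); the paper's exact scheme is not reproduced here.

```python
import numpy as np

def robbins_monro(grad, w0, steps=500, a=1.0, rng=None):
    """Robbins-Monro stochastic approximation sketch: step sizes a/t
    (sum infinite, sum of squares finite) average out noisy gradient
    measurements, so the iterates settle near the true root of grad."""
    rng = np.random.default_rng(rng)
    w = np.array(w0, dtype=float)  # copy so the caller's array is untouched
    for t in range(1, steps + 1):
        # only a noise-corrupted gradient measurement is available
        noisy = grad(w) + rng.normal(scale=0.5, size=w.shape)
        w -= (a / t) * noisy
    return w
```

With `grad(w) = w - 3` (minimum at 3), the iterates approach 3 despite the noise, because late steps are too small for individual noisy measurements to move the estimate far.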
Abstract:
The Self-Organizing Map (SOM) is a popular unsupervised neural network able to provide effective clustering and data visualization for multidimensional input datasets. In this paper, we present an application of the simulated annealing procedure to the SOM learning algorithm, with the aim of obtaining faster learning and better performance in terms of quantization error. The proposed learning algorithm, called the Fast Learning Self-Organized Map, does not compromise the simplicity of the standard SOM's basic learning algorithm. It also improves the quality of the resulting maps by providing better clustering quality and topology preservation of multidimensional input data. Several experiments compare the proposed approach with the original algorithm and with some of its modifications and speed-up techniques.
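The core idea of coupling SOM training to an annealing schedule can be sketched as follows, assuming a geometric cooling of a "temperature" that drives both the learning rate and the neighbourhood width. All parameters here are hypothetical; the paper's actual schedule is not specified in the abstract.

```python
import numpy as np

def som_train(data, grid=(8, 8), epochs=30, T0=1.0, cooling=0.85, rng=None):
    """Minimal SOM whose learning rate and neighbourhood width follow a
    simulated-annealing-style geometric cooling schedule (illustrative)."""
    rng = np.random.default_rng(rng)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # grid coordinates of each unit, shape (h, w, 2)
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    T = T0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            # best-matching unit for this sample
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # neighbourhood shrinks and learning rate decays as T cools
            sigma = 0.5 + T * max(h, w) / 2
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma**2))
            weights += (0.5 * T) * g[..., None] * (x - weights)
        T *= cooling  # geometric cooling, as in simulated annealing
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its nearest map unit."""
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```

Early (hot) iterations make broad, coarse adjustments; late (cold) ones fine-tune individual units, which is the mechanism by which annealed schedules trade off fast ordering against final quantization error.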
Abstract:
For several years, online educational tools such as Blackboard have been used by universities to foster collaborative learning in an online setting. Such tools tend to be implemented in a top-down fashion, with the institution providing the tool to the students and instructing them to use it. Recently, however, a more informal, bottom-up approach is increasingly being employed by the students themselves in the form of social networks such as Facebook. With over 9,000 registered Facebook users at the University of Reading alone at the beginning of this study, rising to over 12,000, Facebook is becoming the de facto social network of choice for higher education students in the UK, and there was increasing anecdotal evidence that students were actively learning via Facebook rather than through Blackboard. To test the validity of these anecdotes, a questionnaire was sent to students, asking them about their learning experiences via Blackboard and Facebook. The results show that students are making use of the tools available to them even when there is no formal academic content, and that increased use of a social networking tool is correlated with a reported increase in learning as a result of that use.
Abstract:
This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component is aimed at representing knowledge of concepts in the domain, so that their explanations can be adapted to user needs. The first part of the paper describes two studies aimed at analysing user requirements. The first one is a questionnaire study which examines the respondents' familiarity with concepts. The second one is an analysis of concept descriptions in textbooks and from expert epidemiologists, which examines how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies have been used to design the user modeling component of EPIAIM. This module operates in two steps. In the first step, a few trigger questions allow the activation of a stereotype that includes a "body" and an "inference component". The body is the representation of the body of knowledge that a class of users is expected to know, along with the probability that the knowledge is known. In the inference component, the learning process of concepts is represented as a belief network. Hence, in the second step the belief network is used to refine the initial default information in the stereotype's body. This is done by asking a few questions on those concepts where it is uncertain whether or not they are known to the user, and propagating this new evidence to revise the whole situation. The system has been implemented on a workstation under UNIX. An example of the system in operation is presented, and advantages and limitations of the approach are discussed.
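The two-step scheme (stereotype priors, then evidence propagation) can be sketched in miniature. Concept names, prior values, and the single-step update rule below are all hypothetical stand-ins for EPIAIM's full belief network.

```python
def refine(priors, links, concept, known):
    """Refine stereotype defaults after one answer (illustrative sketch).

    priors: {concept: P(user knows it)} supplied by the stereotype's body.
    links:  {(a, b): P(b known | a known)} -- directed conditional links.
    Observing `concept` fixes its belief to 0 or 1, then a crude single-step
    propagation (a stand-in for full belief propagation) revises neighbours,
    assuming P(b | not a) = priors[b] / 2.
    """
    beliefs = dict(priors)
    beliefs[concept] = 1.0 if known else 0.0
    for (a, b), p in links.items():
        if a == concept:
            # total probability: P(b) = P(b|a)P(a) + P(b|not a)P(not a)
            beliefs[b] = p * beliefs[a] + (priors[b] / 2) * (1 - beliefs[a])
    return beliefs
```

For example, if a user turns out to know "relative_risk", belief that they also know the linked concept "odds_ratio" rises above its stereotype default, so the system need not ask about it.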
Abstract:
The Self-Organizing Map (SOM) algorithm has been extensively used for analysis and classification problems. For such problems, datasets are becoming larger and larger, and it is necessary to speed up SOM learning. In this paper we present an application of the Simulated Annealing (SA) procedure to the SOM learning algorithm. The goal of the algorithm is to obtain fast learning and better performance in terms of matching of input data and regularity of the obtained map. An advantage of the proposed technique is that it preserves the simplicity of the basic algorithm. Several tests, carried out on different large datasets, demonstrate the effectiveness of the proposed algorithm in comparison with the original SOM and with some of its modifications introduced to speed up the learning.
Abstract:
New conceptual ideas on network architectures have been proposed in the recent past. Current store-and-forward routers are replaced by active intermediate systems, which are able to perform computations on transient packets, an approach that proves very helpful for developing and deploying new protocols in a short time. This paper introduces a new routing algorithm, based on a congestion metric and inspired by the behavior of ants in nature. The use of the Active Networks paradigm, combined with a cooperative learning environment, produces a robust, decentralized algorithm capable of adapting quickly to changing conditions.
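The ant-inspired, congestion-aware idea can be sketched as follows. Graph layout, pheromone update rule, and all parameters here are hypothetical, simplified in the style of ant-routing schemes; the paper's actual algorithm is not reproduced.

```python
import random

def ant_route(graph, congestion, src, dst, ants=300, evaporation=0.05, seed=0):
    """Ant-routing sketch: ants walk from src to dst, link choice is biased
    by pheromone divided by (1 + congestion), and links on completed short
    paths receive a pheromone deposit while all links slowly evaporate."""
    rng = random.Random(seed)
    pher = {(u, v): 1.0 for u in graph for v in graph[u]}
    for _ in range(ants):
        node, path = src, [src]
        while node != dst and len(path) < len(graph) * 2:
            choices = [v for v in graph[node] if v not in path] or graph[node]
            if not choices:
                break  # dead end: this ant deposits nothing
            weights = [pher[(node, v)] / (1 + congestion[(node, v)])
                       for v in choices]
            node = rng.choices(choices, weights)[0]
            path.append(node)
        if node == dst:
            deposit = 1.0 / len(path)  # shorter paths deposit more
            for u, v in zip(path, path[1:]):
                pher[(u, v)] += deposit
        for k in pher:
            pher[k] *= 1 - evaporation  # forget stale information
    return pher
```

Because congestion divides the choice weight, pheromone concentrates on uncongested routes, and evaporation lets the colony re-adapt when link conditions change, which is the decentralized adaptivity the abstract refers to.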
Abstract:
This report addresses the extent that managerial practices can be shared between the aerospace and construction sectors. Current recipes for learning from other industries tend to be oversimplistic and often fail to recognise the embedded and contextual nature of managerial knowledge. Knowledge sharing between business sectors is best understood as an essential source of innovation. The process of comparison challenges assumptions and better equips managers to cope with future change. Comparisons between the aerospace and construction sectors are especially useful because they are so different. The two sectors differ hugely in terms of their institutional context, structure and technological intensity. The aerospace sector has experienced extensive consolidation and is dominated by a small number of global companies. Aerospace companies operate within complex networks of global interdependency such that collaborative working is a commercial imperative. In contrast, the construction sector remains highly fragmented and is characterised by a continued reliance on small firms. The vast majority of construction firms compete within localised markets that are too often characterised by opportunistic behaviour. Comparing construction to aerospace highlights the unique characteristics of both sectors and helps explain how managerial practices are mediated by context. Detailed comparisons between the two sectors are made in a range of areas and guidance is provided for the implementation of knowledge sharing strategies within and across organisations. The commonly accepted notion of ‘best practice’ is exposed as a myth. Indeed, universal models of best practice can be detrimental to performance by deflecting from the need to adapt continuously to changing circumstances. Competitiveness in the construction sector too often rests on efficiency in managing contracts, with a particular emphasis on the allocation of risk. 
Innovation in construction tends to be problem-driven and is rarely shared from project to project. In aerospace, the dominant model of competitiveness means that firms have little choice other than to invest in continuous innovation, despite difficult trading conditions. Research and development (R&D) expenditure in aerospace continues to rise as a percentage of turnover. A sustained capacity for innovation within the aerospace sector depends crucially upon stability and continuity of work. In the construction sector, the emergence of the ‘hollowed-out’ firm has undermined the industry’s capacity for innovation. Integrated procurement contexts such as prime contracting in construction potentially provide a more supportive climate for an innovation-based model of competitiveness. However, investment in new ways of working depends upon a shift in thinking not only amongst construction contractors, but also amongst the industry’s major clients.
Abstract:
Research is described that sought to understand how senior managers within regional contracting firms conceptualize and enact competitiveness. Existing formal discourses of construction competitiveness include the discourse of 'best practice' and the various theories of competitiveness as routinely mobilized within the academic literature. Such discourses consistently underplay the influence of contextual factors in shaping how competitiveness is enacted. An alternative discourse of competitiveness is outlined based on the concepts of localized learning and embeddedness. Two case studies of regional construction firms provide new insights into the emergent discourses of construction competitiveness. The empirical findings resonate strongly with the concepts of localized learning and embeddedness. The case studies illustrate the importance of de-centralized structures which enable multiple business units to become embedded within localized markets. A significant degree of autonomy is essential to facilitate localized entrepreneurial behaviour. In essence, sustained competitiveness was found to depend upon the extent to which de-centralized business units enact ongoing processes of localized learning. Once local business units have become embedded within localized markets the essential challenge is how to encourage continued entrepreneurial behaviour while maintaining a degree of centralized control and coordination. Of key importance is the recognition that the capabilities that make companies competitive transcend organizational boundaries such that they become situated within complex networks of relational ties.
Abstract:
Self-organizing neural networks have been implemented in a wide range of application areas such as speech processing, image processing, optimization and robotics. Recent variations to the basic model proposed by the authors enable it to order state space using a subset of the input vector and to apply a local adaptation procedure that does not rely on a predefined test duration limit. Both these variations have been incorporated into a new feature map architecture that forms an integral part of a Hybrid Learning System (HLS) based on a genetic-based classifier system. Problems are represented within HLS as objects characterized by environmental features. Objects controlled by the system have preset targets set against a subset of their features. The system's objective is to achieve these targets by evolving a behavioural repertoire that efficiently explores and exploits the problem environment. Feature maps encode two types of knowledge within HLS — long-term memory traces of useful regularities within the environment and the classifier performance data calibrated against an object's feature states and targets. Self-organization of these networks constitutes non-genetic-based (experience-driven) learning within HLS. This paper presents a description of the HLS architecture and an analysis of the modified feature map implementing associative memory. Initial results are presented that demonstrate the behaviour of the system on a simple control task.
Abstract:
A multi-layered architecture of self-organizing neural networks is being developed as part of an intelligent alarm processor to analyse a stream of power grid fault messages and provide a suggested diagnosis of the fault location. Feedback concerning the accuracy of the diagnosis is provided by an object-oriented grid simulator which acts as an external supervisor to the learning system. The utilization of artificial neural networks within this environment should result in a powerful generic alarm processor which will not require extensive training by a human expert to produce accurate results.
Abstract:
The authors describe a learning classifier system (LCS) which employs genetic algorithms (GA) for adaptive online diagnosis of power transmission network faults. The system monitors switchgear indications produced by a transmission network, reporting fault diagnoses on any patterns indicative of faulted components. The system evaluates the accuracy of diagnoses via a fault simulator developed by National Grid Co. and adapts to reflect the current network topology by use of genetic algorithms.
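A learning classifier system of this kind can be sketched in miniature: ternary condition strings (`0`/`1`/`#`) matched against switchgear indication patterns, strengths adjusted by simulator feedback, and a genetic step that recombines strong rules. Rule contents, rewards, and the crossover operator below are illustrative stand-ins, not the National Grid system.

```python
import random

def matches(cond, pattern):
    """Ternary match: '#' is a wildcard, '0'/'1' must match exactly."""
    return all(c in ("#", p) for c, p in zip(cond, pattern))

def classify(rules, pattern):
    """Report the diagnosis of the strongest rule matching the pattern."""
    matched = [r for r in rules if matches(r[0], pattern)]
    return max(matched, key=lambda r: r[2])[1] if matched else None

def reinforce(rules, pattern, correct, reward=1.0, penalty=0.5):
    """Simulator feedback: strengthen matching rules whose diagnosis was
    confirmed, weaken matching rules whose diagnosis was wrong."""
    out = []
    for cond, diag, strength in rules:
        if matches(cond, pattern):
            strength = (strength + reward if diag == correct
                        else max(0.0, strength - penalty))
        out.append((cond, diag, strength))
    return out

def ga_step(rules, rng):
    """Genetic step: one-point crossover of the two strongest rules'
    conditions, letting the population track the current topology."""
    a, b = sorted(rules, key=lambda r: -r[2])[:2]
    cut = rng.randrange(1, len(a[0]))
    child = (a[0][:cut] + b[0][cut:], a[1], (a[2] + b[2]) / 2)
    return rules + [child]
```

The key property is that the rule population, not any single rule, carries the diagnosis knowledge: feedback reshapes strengths online, and the genetic step generates candidate generalizations when the network topology changes.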
Abstract:
A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and/or other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. These two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy for some difficult data-based modelling problems.
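The dimensionality-reduction mechanism can be sketched with a tiny mixture of experts in which each expert sees only its own subset of the input features, so no single model works with the full regression vector. The linear experts and softmax gate below are hypothetical simplifications, not the paper's neuro-fuzzy construction.

```python
import numpy as np

def men_predict(x, experts, gate_w):
    """Mixture-of-experts sketch (illustrative).

    experts: list of (feature_indices, weight_vector) pairs -- each expert
             regresses on only its own feature subset, which is how the
             regression-vector dimension is kept small per expert.
    gate_w:  matrix mapping the full input to one gating score per expert.
    """
    gates = np.exp(gate_w @ x)          # softmax gating network
    gates /= gates.sum()
    outs = np.array([w @ x[idx] for idx, w in experts])
    return gates @ outs                 # gate-weighted blend of expert outputs
```

Because each `(idx, w)` pair is self-contained, the experts can also be trained independently, mirroring the parallel learning advantage claimed for the second construction algorithm.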