38 results for raccomandazione e-learning privacy tecnica rule-based recommender suggerimento
Abstract:
Activities of daily living (ADL) are important for quality of life. They are indicators of cognitive health status, and their assessment is a measure of independence in everyday living. ADL are difficult to assess reliably using questionnaires due to self-reporting biases. Various sensor-based (wearable, in-home, intrusive) systems have been proposed to successfully recognize and quantify ADL without relying on self-reporting. New classifiers for such sensor data are continually being proposed. We propose two ad-hoc classifiers that are based only on non-intrusive sensor data. METHODS: A wireless sensor system with ten sensor boxes was installed in the homes of ten healthy subjects to collect ambient data over 20 consecutive days. A handheld protocol device and a paper logbook were also provided to the subjects. Eight ADL were selected for recognition. We developed two ad-hoc ADL classifiers, namely the rule-based forward-chaining inference engine (RBI) classifier and the circadian activity rhythm (CAR) classifier. The RBI classifier finds facts in the data and matches them against the rules. The CAR classifier works within a framework that automatically rates routine activities to detect regular, repeating patterns of behavior. For comparison, two state-of-the-art classifiers [Naïve Bayes (NB), Random Forest (RF)] were also used. All classifiers were validated with the collected data sets for classification and recognition of the eight specific ADL. RESULTS: Out of a total of 1,373 ADL, the RBI classifier correctly determined 1,264 while missing 109, and the CAR classifier correctly determined 1,305 while missing 68. The RBI and CAR classifiers recognized activities with an average sensitivity of 91.27% and 94.36%, respectively, outperforming both RF and NB. CONCLUSIONS: The performance of the classifiers varied significantly, which shows that the choice of classifier plays an important role in ADL recognition. Both the RBI and CAR classifiers performed better than the existing state of the art (NB, RF) on all ADL. Of the two ad-hoc classifiers, the CAR classifier was more accurate and is likely to be better suited than the RBI for distinguishing and recognizing complex ADL.
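A minimal, hypothetical sketch of the forward-chaining idea behind an RBI-style classifier: facts derived from ambient sensors are matched against hand-written rules, and every rule that fires labels an ADL. The sensor names, facts, and rules below are illustrative assumptions, not the rule base used in the study.

```python
# Forward-chaining sketch: fire every rule whose conditions hold for the facts.
def infer_adl(facts, rules):
    recognized = []
    for activity, condition in rules:
        if condition(facts):
            recognized.append(activity)
    return recognized

# Facts aggregated from ambient (non-intrusive) sensor boxes over a time window.
# All keys and values below are illustrative.
facts = {
    "kitchen_motion": True,
    "stove_power_on": True,
    "fridge_door_opened": True,
    "hour_of_day": 12,
}

# Each rule maps sensor facts to one ADL; thresholds are made up for the example.
rules = [
    ("preparing_meal", lambda f: f["kitchen_motion"] and f["stove_power_on"]),
    ("eating",         lambda f: f["fridge_door_opened"] and 11 <= f["hour_of_day"] <= 14),
]

print(infer_adl(facts, rules))  # e.g. ['preparing_meal', 'eating']
```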
Abstract:
Land degradation and land conservation maps at a (sub-)national scale are critical for project planning for sustainable land management. It has long been recognized that online-accessible and low-cost raster data sets (e.g. Landsat imagery, SRTM DEMs) provide a readily available basis for land resource assessments in developing countries. However, the choice of spatial, temporal and spectral resolution of such data is often limited. Furthermore, while local expert knowledge on land degradation processes is abundant, difficulties are often encountered when linking existing knowledge with modern approaches including GIS and RS. The aim of this study was to develop an easily applicable, standardized workflow for preliminary spatial assessments of land degradation and conservation, which also allows the integration of existing expert knowledge. The core of the developed method is a workflow for rule-based land resource assessment. In a systematic way, this workflow leads from predefined land degradation and conservation classes to field indicators, to suitable spatial proxy data, and finally to a set of rules for classification of spatial datasets. Pre-conditions are used to narrow the area of interest. Decision tree models are used for integrating the different rules. It can be concluded that the presented workflow assists experts from different disciplines, in collaboration with GIS/RS specialists, in establishing a preliminary model for assessing land degradation and conservation in a spatially explicit manner. The workflow provides support when linking field indicators and spatial datasets, and when determining field indicators for ground-truthing.
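As an illustration of the rule-based classification step, the following toy sketch combines two hypothetical spatial proxy layers through expert-defined thresholds after a pre-condition narrows the area of interest. Layer names, thresholds, and class labels are assumptions made for this example, not the study's actual rules.

```python
import numpy as np

# Stand-ins for spatial proxy layers derived from e.g. Landsat and SRTM data.
ndvi_trend = np.random.uniform(-0.5, 0.5, (4, 4))   # proxy for vegetation decline
slope_deg  = np.random.uniform(0, 40, (4, 4))        # slope derived from a DEM

# Pre-condition narrows the area of interest (here: only sloping land is assessed).
area_of_interest = slope_deg > 5

# Decision-tree-like rules assign preliminary degradation/conservation classes.
classes = np.full(ndvi_trend.shape, "not_assessed", dtype=object)
classes[area_of_interest & (ndvi_trend < -0.1)] = "degradation_suspected"
classes[area_of_interest & (ndvi_trend >= -0.1)] = "stable_or_conserved"

print(classes)
```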
Abstract:
The presented approach describes a model for a rule-based expert system that calculates the temporal variability of the release of wet snow avalanches, under the assumption of avalanche triggering without the loading of new snow. The knowledge base of the model is created from investigations of the system behaviour of wet snow avalanches in the Italian Ortles Alps, and is represented by a fuzzy-logic rule base. Input parameters of the expert system are numerical and linguistic variables, measurable meteorological and topographical factors, and observable characteristics of the snow cover. The output of the inference method is the quantified release disposition for wet snow avalanches. By combining topographical parameters with the spatial interpolation of the calculated release disposition, a hazard index map is generated dynamically. Furthermore, the spatial and temporal variability of the damage potential on roads exposed to wet snow avalanches can be quantified, expressed as the number of persons at risk. The application of the rule base to the available data in the study area generated plausible results. The study demonstrates the potential of expert systems and fuzzy logic in the field of natural hazard monitoring and risk management.
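The following is a minimal Mamdani-style fuzzy-inference sketch of how a release-disposition index could be computed from fuzzified inputs. The chosen inputs, membership functions, and the single rule are invented for illustration and are not the calibrated rule base of the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Crisp inputs from measurements/observations (illustrative values).
air_temp_c = 3.0      # degrees Celsius
snow_wetness = 0.7    # 0 (dry) .. 1 (very wet), observer rating

# Fuzzify the inputs (membership degrees of linguistic terms).
temp_warm = tri(air_temp_c, 0.0, 5.0, 10.0)
wet_high  = tri(snow_wetness, 0.4, 1.0, 1.6)

# Rule: IF temperature is warm AND wetness is high THEN disposition is high.
rule_high = min(temp_warm, wet_high)

# Defuzzify over the output universe (0..1 disposition) by centroid.
universe = np.linspace(0.0, 1.0, 101)
disposition_high = np.minimum(rule_high, tri(universe, 0.5, 1.0, 1.5))
release_disposition = (universe * disposition_high).sum() / max(disposition_high.sum(), 1e-9)

print(round(release_disposition, 2))
```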
Abstract:
Indoor location-awareness will be an inseparable feature of mobile services and applications in future wireless networks. Its ubiquitous availability is currently still obstructed by technological challenges and privacy issues. We propose an innovative approach to indoor positioning whose main goal is a system that is self-learning and able to adapt to various radio propagation environments. The approach combines estimation of propagation conditions, subsequent appropriate channel modelling, and optimisation feedback to the positioning algorithm in use. The main advantages of the proposal are reduced system set-up effort, automatic re-calibration and increased precision.
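A small sketch of the self-calibration idea, assuming a standard log-distance path-loss model: the path-loss exponent is re-fitted from reference measurements, and the updated channel model is fed back into RSSI-to-distance conversion. The model choice and all numbers are assumptions for illustration.

```python
import numpy as np

def fit_path_loss_exponent(distances_m, rssi_dbm, p0_dbm, d0=1.0):
    """Least-squares fit of n in the model RSSI = p0 - 10*n*log10(d/d0)."""
    x = -10.0 * np.log10(np.asarray(distances_m, dtype=float) / d0)
    y = np.asarray(rssi_dbm, dtype=float) - p0_dbm
    return float((x @ y) / (x @ x))

def rssi_to_distance(rssi_dbm, p0_dbm, n, d0=1.0):
    """Invert the fitted channel model to obtain a distance estimate."""
    return d0 * 10.0 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

# Calibration measurements at known distances from one access point (illustrative).
n_hat = fit_path_loss_exponent([1, 2, 4, 8], [-40, -47, -53, -60], p0_dbm=-40)
print(round(n_hat, 2), round(rssi_to_distance(-50, -40, n_hat), 1))
```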
Abstract:
We investigate a recently proposed model for decision learning in a population of spiking neurons where synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model, binary population decision making based on spike/no-spike coding, a detailed computational analysis is given of how learning performance depends on population size and task complexity. Next, we extend the basic model to n-ary decision making and show that it can also be used in conjunction with other population codes such as rate or even latency coding.
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Furthermore, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior which is consistent with behavioral data from humans and monkeys and which reveals properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
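A highly simplified sketch of the ingredients named above: binary spike/no-spike population decisions, an eligibility-like term, and weight updates gated jointly by reward and a population signal. This toy version only illustrates the general idea and is not the paper's exact spike-based rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_pop = 10, 50
w = rng.normal(0.0, 0.1, (n_pop, n_in))   # synapses onto a decision population
eta = 0.05

for trial in range(500):
    x = rng.random(n_in)                          # presynaptic activity pattern
    p_spike = 1.0 / (1.0 + np.exp(-(w @ x)))      # stochastic spike/no-spike coding
    spikes = (rng.random(n_pop) < p_spike).astype(float)
    decision = int(spikes.mean() > 0.5)           # population majority decides

    # Eligibility-like term: correlation of presynaptic input with the deviation
    # of each neuron's spiking from its expectation (a stochastic-gradient factor).
    # In the full model, such traces decay over time to bridge delayed rewards.
    eligibility = np.outer(spikes - p_spike, x)

    reward = 1.0 if decision == 1 else 0.0        # toy task: decision "1" is rewarded
    population_feedback = spikes.mean() - 0.5     # crude stand-in for the population signal

    # Update gated jointly by the reward and the population feedback signal.
    w += eta * reward * population_feedback * eligibility
```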
Abstract:
Learning by reinforcement is important in shaping animal behavior. But behavioral decision making is likely to involve the integration of many synaptic events in space and time. So, when a single reinforcement signal is used to modulate synaptic plasticity, a twofold problem arises: different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but by a population feedback signal as well. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal-difference-based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Furthermore, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second one involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It only has a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.
Abstract:
This publication offers concrete suggestions for implementing an integrative and learning-oriented approach to agricultural extension with the goal of fostering sustainable development. It targets governmental and non-governmental organisations, development agencies, and extension staff working in the field of rural development. The book looks into the conditions and trends that influence extension today, and outlines new challenges and necessary adaptations. It offers a basic reflection on the goals, the criteria for success and the form of a state-of-the-art approach to extension. The core of the book consists of a presentation of Learning for Sustainability (LforS), an example of an integrative, learning-oriented approach that is based on three crucial elements: stakeholder dialogue, knowledge management, and organizational development. Awareness raising and capacity building, social mobilization, and monitoring & evaluation are additional building blocks. The structure and organisation of the LforS approach as well as a selection of appropriate methods and tools are presented. The authors also address key aspects of developing and managing a learning-oriented extension approach. The book illustrates how LforS can be implemented by presenting two case studies, one from Madagascar and one from Mongolia. It addresses conceptual questions and at the same time it is practice-oriented. In contrast to other extension approaches, LforS does not limit its focus to production-related aspects and the development of value chains: it also addresses livelihood issues in a broad sense. With its focus on learning processes LforS seeks to create a better understanding of the links between different spheres and different levels of decision-making; it also seeks to foster integration of the different actors’ perspectives.
Abstract:
Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change over time due to the co-adaptation of the other agents' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
Abstract:
Given the complex structure of the brain, how can synaptic plasticity explain the learning and forgetting of associations when these are continuously changing? We address this question by studying different reinforcement learning rules in a multilayer network in order to reproduce monkey behavior in a visuomotor association task. Our model can only reproduce the learning performance of the monkey if the synaptic modifications depend on the pre- and postsynaptic activity, and if the intrinsic level of stochasticity is low. This favored learning rule is based on reward-modulated Hebbian synaptic plasticity and has the interesting feature that the learning performance does not substantially degrade when layers are added to the network, even for a complex problem.
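A minimal sketch of reward-modulated Hebbian plasticity of the kind favored by the model: the weight change is proportional to pre- and postsynaptic activity and gated by the reward relative to a running baseline. The toy two-stimulus association task, network size, and constants are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, (2, 2))     # maps 2 visual stimuli to 2 motor responses
eta, noise, reward_avg = 0.1, 0.1, 0.0

for trial in range(300):
    stim = rng.integers(2)
    x = np.eye(2)[stim]                              # one-hot presynaptic activity
    y = w @ x + noise * rng.normal(size=2)           # low intrinsic stochasticity
    action = int(np.argmax(y))
    post = np.eye(2)[action]                         # winner-take-all postsynaptic activity

    reward = 1.0 if action == stim else 0.0          # correct association is rewarded
    w += eta * (reward - reward_avg) * np.outer(post, x)   # reward-modulated Hebbian update
    reward_avg += 0.05 * (reward - reward_avg)              # running reward baseline
```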