624 results for Learning Approach


Relevance: 30.00%

Abstract:

Series: "Advances in Intelligent Systems and Computing, ISSN 2194-5357, vol. 417"

Relevance: 30.00%

Abstract:

Doctoral Thesis in Health Sciences.

Relevance: 30.00%

Abstract:

Vocational Education and Training (VET) is a continuous, long-term process of economic, organisational and personal development. It envisions the construction of dynamic skills to improve performance, productivity and organisational, personal and social development. This article focuses on generating skills. It frames training as a process of work-linked training and as a primary source for generating skills whilst seeking to boost creativity. It sheds light upon the discussion of learning transfer as a necessary condition for structuring performance and competitiveness. It highlights the Learning Transfer System Inventory (LTSI), because it makes it possible to measure the effectiveness of training and to identify organisations' weaknesses. The data used were collected from the Eurostat database.

Relevance: 30.00%

Abstract:

We analyze the classical Bertrand model when consumers exhibit some strategic behavior in deciding from which seller they will buy. We use two related but different tools. Both consider a probabilistic learning (or evolutionary) mechanism, and in both of them consumers' behavior influences the competition between the sellers. The results obtained show that, in general, developing some sort of loyalty is a good strategy for the buyers, as it works in their best interest. First, we consider a learning procedure described by a deterministic dynamic system and, using strong simplifying assumptions, we can produce a description of the process behavior. Second, we use finite automata to represent the strategies played by the agents and an adaptive process based on genetic algorithms to simulate the stochastic process of learning. By doing so we can relax some of the strong assumptions used in the first approach and still obtain the same basic results. It is suggested that the limitations of the first approach (analytical) provide a good motivation for the second approach (agent-based). Indeed, although both approaches address the same problem, the use of agent-based computational techniques allows us to relax hypotheses and overcome the limitations of the analytical approach.
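
As a rough illustration of the kind of agent-based simulation described here (not the paper's actual model; all parameter values and update rules below are invented), two Bertrand sellers adjust prices by trial and error while buyers reinforce a seller-choice probability, i.e. a simple form of loyalty:

```python
# Hypothetical sketch only: parameter values and update rules are invented.
import random

random.seed(0)
N_BUYERS, ROUNDS, COST = 200, 3000, 1.0
prices = [5.0, 5.0]                              # sellers' current prices
step = [0.05, -0.05]                             # each seller's price-adjustment direction
last_profit = [0.0, 0.0]
loyalty = [[0.5, 0.5] for _ in range(N_BUYERS)]  # per-buyer probabilities of choosing seller 0/1

for _ in range(ROUNDS):
    profit = [0.0, 0.0]
    cheaper = 0 if prices[0] <= prices[1] else 1
    for probs in loyalty:
        s = 0 if random.random() < probs[0] else 1     # buyer picks a seller
        profit[s] += prices[s] - COST
        # loyalty (reinforcement): choice weight shifts only slowly toward the cheaper seller
        probs[cheaper] = min(1.0, probs[cheaper] + 0.005)
        probs[1 - cheaper] = 1.0 - probs[cheaper]
    for s in range(2):
        # naive hill climbing: keep moving the price in the same direction while
        # profit rises, reverse the direction when profit falls
        if profit[s] < last_profit[s]:
            step[s] = -step[s]
        prices[s] = max(COST, prices[s] + step[s])
    last_profit = profit

print("final prices:", [round(p, 2) for p in prices])
```

With sluggish, loyalty-driven demand, undercutting pays off only slowly, so simulated prices need not collapse to marginal cost as in the textbook Bertrand outcome.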

Relevance: 30.00%

Abstract:

The long-term goal of this research is to develop a program able to produce an automatic segmentation and categorization of textual sequences into discourse types. In this preliminary contribution, we present the construction of an algorithm which takes a segmented text as input and attempts to categorize sequences as narrative, argumentative, descriptive and so on. This work also aims to investigate a possible convergence between unsupervised statistical learning and the typological approach developed in French text and discourse analysis, in particular by Adam (2008) and Bronckart (1997).
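
As a purely illustrative sketch (the authors' actual algorithm and features are not reproduced here), pre-segmented sequences could be grouped without supervision using bag-of-words features and k-means; the example segments and the use of scikit-learn are assumptions:

```python
# Hypothetical sketch: segment texts are placeholders and scikit-learn is an assumed dependency.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

segments = [
    "Il ouvrit la porte et sortit dans la nuit.",                     # narrative-like
    "Or, si cette hypothèse est vraie, il faut en conclure que...",   # argumentative-like
    "La pièce était vaste, claire, aux murs blancs.",                 # descriptive-like
    # ... more pre-segmented sequences ...
]

# TF-IDF features and k-means stand in for whatever representation and
# unsupervised method the authors actually use
X = TfidfVectorizer(analyzer="word", ngram_range=(1, 2)).fit_transform(segments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for label, segment in zip(labels, segments):
    print(label, segment)
```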

Relevance: 30.00%

Abstract:

Locating new wind farms is of crucial importance for energy policies of the next decade. To select new locations, an accurate picture of the wind fields is necessary. However, characterizing wind fields is a difficult task, since the phenomenon is highly nonlinear and related to complex topographical features. In this paper, we propose both a nonparametric model to estimate wind speed at different time instants and a procedure to discover underrepresented topographic conditions, where new measuring stations could be added. Compared to space-filling techniques, this latter approach privileges optimization of the output space, thus locating new potential measuring sites through the uncertainty of the model itself.
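
A minimal sketch of the underlying idea, using a Gaussian process as a stand-in for the paper's nonparametric model (the topographic features and data below are synthetic): candidate sites where predictive uncertainty is largest point to underrepresented topographic conditions.

```python
# Hypothetical sketch: a Gaussian process stands in for the paper's nonparametric model,
# and the topographic features and wind data below are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# existing stations: columns = (elevation, slope, distance to coast), rescaled to [0, 1]
X_obs = rng.uniform(0, 1, size=(30, 3))
y_obs = 5 + 3 * X_obs[:, 0] - 2 * X_obs[:, 1] + rng.normal(0, 0.3, 30)  # wind speed (m/s)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=0.1).fit(X_obs, y_obs)

X_cand = rng.uniform(0, 1, size=(500, 3))   # candidate sites described by the same features
_, std = gp.predict(X_cand, return_std=True)
most_uncertain = np.argsort(std)[-5:]       # conditions least covered by existing stations
print(X_cand[most_uncertain])
```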

Relevance: 30.00%

Abstract:

These notes try to clarify some discussions on the formulation of individual intertemporal behavior under adaptive learning in representative agent models. First, we discuss two suggested approaches and related issues in the context of a simple consumption-saving model. Second, we show that the analysis of learning in the New Keynesian monetary policy model based on “Euler equations” provides a consistent and valid approach.
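
For concreteness, a generic one-step-ahead ("Euler equation") formulation of the consumption-saving problem under subjective expectations looks as follows; the notation is standard rather than copied from these notes:

```latex
% One-step-ahead consumption Euler equation and budget constraint under learning;
% \hat{E}_t is the agent's subjective (possibly non-rational) expectation.
u'(c_t) \;=\; \beta\,\hat{E}_t\!\left[(1+r_{t+1})\,u'(c_{t+1})\right],
\qquad c_t + a_{t+1} = (1+r_t)\,a_t + y_t .
```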

Relevance: 30.00%

Abstract:

Incorporating adaptive learning into macroeconomics requires assumptions about how agents incorporate their forecasts into their decision-making. We develop a theory of bounded rationality that we call finite-horizon learning. This approach generalizes the two existing benchmarks in the literature: Euler-equation learning, which assumes that consumption decisions are made to satisfy the one-step-ahead perceived Euler equation; and infinite-horizon learning, in which consumption today is determined optimally from an infinite-horizon optimization problem with given beliefs. In our approach, agents hold a finite forecasting/planning horizon. We find for the Ramsey model that the unique rational expectations equilibrium is E-stable at all horizons. However, transitional dynamics can differ significantly depending upon the horizon.
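
As a rough illustration of how the horizon enters (the notation is generic, not the paper's), iterating the one-step Euler equation forward N periods under subjective expectations gives:

```latex
% Finite-horizon version obtained by iterating the one-step condition forward
% N periods under the subjective expectation operator \hat{E}_t.
u'(c_t) \;=\; \beta^{N}\,\hat{E}_t\!\left[\Bigl(\textstyle\prod_{j=1}^{N}(1+r_{t+j})\Bigr)\,u'(c_{t+N})\right].
```

Setting N = 1 recovers Euler-equation learning, while letting N grow, together with a terminal condition on wealth, approaches the infinite-horizon benchmark.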

Relevance: 30.00%

Abstract:

This paper studies optimal monetary policy in a framework that explicitly accounts for policymakers' uncertainty about the channels of transmission of oil prices into the economy. More specifically, I examine the robust response to the real price of oil that US monetary authorities would have been recommended to implement in the period 1970-2009, had they used the approach proposed by Cogley and Sargent (2005b) to incorporate model uncertainty and learning into policy decisions. In this context, I investigate the extent to which regulators' changing beliefs over different models of the economy play a role in the policy selection process. The main conclusion of this work is that, in the specific environment under analysis, one of the underlying models dominates the optimal interest rate response to oil prices. This result persists even when alternative assumptions on the model's priors change the pattern of the relative posterior probabilities, and can thus be attributed to the presence of model uncertainty itself.
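
A hypothetical numerical sketch of the belief-weighted policy choice this kind of approach implies (all model names, probabilities and response coefficients below are invented for illustration):

```python
# Hypothetical numbers only: posterior model probabilities weight each model's
# recommended interest-rate response to the real price of oil.
import numpy as np

models = ["oil-in-supply", "oil-in-demand", "no-oil-channel"]   # invented labels
posterior = np.array([0.6, 0.3, 0.1])     # beliefs over models, updated from data
response = np.array([0.50, 0.20, 0.00])   # each model's optimal response coefficient

averaged = float(posterior @ response)    # belief-weighted response
print(f"averaged response to oil prices: {averaged:.2f}")   # 0.36, driven by the dominant model
```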

Relevance: 30.00%

Abstract:

This research project is an attempt to give arguments in favour of using cooperative learning activities in foreign language (FL) classrooms as an effective approach to learning. The arguments offered are presented from two different perspectives: the first is based on an empirical study of three students working together to achieve a common goal; the second is a compilation of the trainee teacher's experiences with group work during her practicum periods in a high school. This part is illustrated by examples that emphasize that cooperative learning can facilitate learning, promote socialisation and increase students' self-esteem.

Relevance: 30.00%

Abstract:

The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
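
One common way to hybridize the two families, shown here only as a sketch and not necessarily the authors' exact setup, is to let a neural network capture the nonlinear trend from covariates and a Gaussian process (kriging-like) model the spatial residuals; the data and column meanings below are hypothetical:

```python
# Hypothetical sketch: a neural network models the trend from covariates and a
# Gaussian process (kriging-like) models the spatial residuals. Data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(200, 2))               # station locations (km)
covars = rng.normal(size=(200, 3))                        # e.g. elevation, radar value, slope
precip = 10 + 4 * covars[:, 1] + rng.normal(0, 1, 200)    # accumulated precipitation (mm)

trend = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(covars, precip)
residuals = precip - trend.predict(covars)
krig = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1.0)).fit(coords, residuals)

# prediction at a new site = trend from its covariates + kriged residual from its location
print(trend.predict(covars[:1]) + krig.predict(coords[:1]))
```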

Relevance: 30.00%

Abstract:

This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns taking into account real-space constraints such as geomorphology, networks and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and models' inputs; this problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM, automatic mapping of soil and water system pollution using ANN, natural hazards risk analysis (avalanches, landslides), and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
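
A minimal sketch of applying Support Vector Regression in a higher-dimensional geo-feature space (e.g. terrain attributes derived from a DEM); the features and target values below are synthetic placeholders, not data from the report:

```python
# Hypothetical sketch: synthetic geo-features and target, scikit-learn assumed available.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))       # e.g. x, y, elevation, slope, aspect, curvature, ...
y = 0.5 * X[:, 2] + np.sin(X[:, 3]) + rng.normal(0, 0.1, 300)   # synthetic soil property

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05)).fit(X, y)
print(model.predict(X[:5]))         # predictions at the first few sites
```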

Relevance: 30.00%

Abstract:

Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion - such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is therefore a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the algorithm has been programmed, we let the different agents interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on that made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only is time needed for a policy to deploy its effects, but it also takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with both the theoretical expectations and the empirical evidence.
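
A toy sketch of neighbor-driven adoption on a grid, not the thesis's actual model, illustrates how local imitation can produce an S-shaped adoption curve and regional clusters; the grid size and update rule below are invented:

```python
# Toy sketch: the grid size and update rule are invented, not taken from the thesis.
import random

random.seed(0)
N = 20
adopted = [[False] * N for _ in range(N)]
adopted[N // 2][N // 2] = True                      # a single seed adopter

def neighbor_share(grid, i, j):
    """Share of the four (wrap-around) neighbors that have already adopted."""
    nbrs = [((i + di) % N, (j + dj) % N) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return sum(grid[a][b] for a, b in nbrs) / len(nbrs)

adopters_over_time = []
for step in range(60):
    new = [row[:] for row in adopted]
    for i in range(N):
        for j in range(N):
            # learning from neighbors: the more neighbors have adopted,
            # the more likely an agent is to adopt as well
            if not adopted[i][j] and random.random() < neighbor_share(adopted, i, j):
                new[i][j] = True
    adopted = new
    adopters_over_time.append(sum(map(sum, adopted)))

print(adopters_over_time)   # cumulative adopters per step: typically an S-shaped curve
```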

Relevance: 30.00%

Abstract:

Background: A form of education called Interprofessional Education (IPE) occurs when two or more professions learn with, from and about each other. The purpose of IPE is to improve collaboration and the quality of care. Today, IPE is considered a key educational approach for students in the health professions. IPE is highly effective when delivered in active patient care, such as in clinical placements. General internal medicine (GIM) is a core discipline where hospital-based clinical placements are mandatory for students in many health professions. However, few interprofessional (IP) clinical placements in GIM have been implemented. We designed such a placement. Placement design: The placement took place in the Department of Internal Medicine at the CHUV. It involved students from nursing, physiotherapy and medicine. The students were in their last year before graduation. Students formed teams consisting of one student from each profession. Each team worked in the same unit and had to take care of the same patient. The placement lasted three weeks. It included formal IP sessions, the most important being facilitated discussions or "briefings" (three per week) during which the students discussed patient care and management. Four teams of students eventually took part in this project. Method: We performed a type of evaluation research called formative evaluation. It aimed at (1) understanding the educational experience and (2) assessing the impact of the placement on student learning. We collected quantitative data with pre- and post-clerkship questionnaires. We also collected qualitative data with two focus group (FG) discussions at the end of the placement. The FGs were audiotaped and transcribed, and a thematic analysis was then performed. Results: We focused on the qualitative data, since the quantitative data lacked statistical power due to the small number of students (N = 11). Five themes emerged from the FG analysis: (1) Learning of others' roles, (2) Learning collaborative competences, (3) Striking a balance between acquiring one's own professional competences and interprofessional competences, (4) Barriers to applying learnt IP competences in the future and (5) Advantages and disadvantages of IP briefings. Conclusions: Our IP clinical placement in GIM appeared to help students learn other professionals' roles and collaborative skills. Some challenges (e.g. finding the same patient for each team) were identified and will require adjustments.