869 results for Error-correcting codes (Information theory)


Relevance:

30.00%

Abstract:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both the exact and the approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and to reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. This methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Moreover, the dimensionality reduction performed by FPCA provides a diagnostic of the quality of the error model, assessing both the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of predicting the exact response for any newly generated realization suggests that the methodology can be used effectively beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
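
To make the workflow concrete, here is a minimal sketch of an FPCA-based error model, assuming the response curves are discretized on a common time grid (so functional PCA reduces to ordinary PCA on the sampled values); the function names, array shapes, and component count are illustrative, not the authors' implementation:

```python
# Minimal sketch of an FPCA-based error model: curves are assumed to be
# discretized on a common time grid, so functional PCA reduces to PCA on
# the sampled values. All names, shapes, and counts are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_error_model(proxy_curves, exact_curves, n_components=5):
    """proxy_curves, exact_curves: (n_learning, n_times) arrays from the
    learning set of realizations where both solvers were run."""
    fpca_proxy = PCA(n_components=n_components).fit(proxy_curves)
    fpca_exact = PCA(n_components=n_components).fit(exact_curves)
    # Regress the exact-model scores on the proxy-model scores.
    reg = LinearRegression().fit(fpca_proxy.transform(proxy_curves),
                                 fpca_exact.transform(exact_curves))
    return fpca_proxy, fpca_exact, reg

def predict_exact(proxy_curve, fpca_proxy, fpca_exact, reg):
    """Predict the exact response of a new realization from its proxy response."""
    scores = reg.predict(fpca_proxy.transform(proxy_curve[None, :]))
    return fpca_exact.inverse_transform(scores)[0]
```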

Relevance:

30.00%

Abstract:

The aim of this study is to examine the factors that constitute core competence, and how companies could best exploit their own resources and know-how with the help of an identified core competence. The theoretical part reviews how core competence has been defined in the literature and how companies can define it internally for themselves. The empirical part examines three content-provider case companies, selected on the basis of a quantitative survey conducted at the Telecom Business Research Center, and describes their competences. The information on the companies is based on interviews with their representatives and on the interviewees' perceptions of their own company. This view is highly relevant for the study, because the definition of core competence is carried out internally within the company precisely by core actors such as those interviewed. In addition to the actual case companies, a practical case is examined in an action-oriented research section. The study and the examples discussed in it are intended to support a company's own core-competence analysis throughout the process.

Relevance:

30.00%

Abstract:

Many species are able to learn to associate behaviours with rewards as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals to use information about payoffs associated with nontried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoff. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask what is the evolutionarily stable learning rule under pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. We analyse through stochastic approximation theory and simulations the learning dynamics on the behavioural timescale, and derive conditions where trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also obtain where trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
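
As a rough illustration of the two learning rules being compared, the toy simulation below pits trial-and-error (realized-payoff) reinforcement against hypothetical reinforcement in a two-action repeated game; the payoff matrix, softmax choice rule, and learning rate are illustrative stand-ins, not the paper's exact model:

```python
# Toy contrast between trial-and-error (realized-payoff) reinforcement and
# "hypothetical" reinforcement that also updates non-tried actions using
# counterfactual payoffs. Payoff matrix and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[3.0, 0.0],   # row: own action, column: partner's action
                   [4.0, 1.0]])  # a prisoner's-dilemma-like game

def choose(values, beta=2.0):
    p = np.exp(beta * values); p /= p.sum()   # softmax action choice
    return rng.choice(2, p=p)

def play(hypothetical, rounds=500, lr=0.1):
    v1, v2 = np.zeros(2), np.zeros(2)  # action values of the two partners
    for _ in range(rounds):
        a1, a2 = choose(v1), choose(v2)
        for v, own, other in ((v1, a1, a2), (v2, a2, a1)):
            if hypothetical:
                # update both actions using (counterfactual) payoffs
                v += lr * (PAYOFF[:, other] - v)
            else:
                # trial-and-error: only the realized payoff of the tried action
                v[own] += lr * (PAYOFF[own, other] - v[own])
    return v1, v2

print("trial-and-error:", play(hypothetical=False))
print("hypothetical:   ", play(hypothetical=True))
```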

Relevance:

30.00%

Abstract:

In this work we analyze the behavior of complex information in the Fresnel domain, taking into account the limited capability of current liquid crystal devices to display complex transmittance values when used as holographic displays. To perform this analysis we compute the reconstruction of Fresnel holograms at several distances using the different parts of the complex distribution (real and imaginary parts, amplitude and phase), as well as using the full complex information adjusted with a method that combines two configurations of the devices in an adding architecture. The RMS error between the amplitude of these reconstructions and the original amplitude is used to evaluate the quality of the displayed information. The error analysis shows different behavior for the reconstructions using the different parts of the complex distribution and for the combined two-device method. Better reconstructions are obtained with two devices whose configurations, when added, densely cover the complex plane. Simulated and experimental results are also presented.
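
The numerical experiment described can be sketched along the following lines: back-propagate a target amplitude to obtain a Fresnel hologram, reconstruct from each part of the complex field, and compare RMS amplitude errors. The transfer-function propagator and all parameters below are illustrative, not the authors' setup:

```python
# Schematic of the error analysis described above: reconstruct a Fresnel
# hologram from different parts of the complex field and measure the RMS
# amplitude error. Propagator and parameters are illustrative.
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Fresnel propagation via the transfer-function method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * np.pi * wavelength * z * (FX**2 + FY**2))  # Fresnel kernel
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hologram of a square target amplitude at distance z (illustrative values).
wavelength, dx, z, n = 633e-9, 8e-6, 0.2, 256
target = np.zeros((n, n)); target[96:160, 96:160] = 1.0
hologram = fresnel_propagate(target, wavelength, dx, -z)   # back-propagate

parts = {"real": hologram.real, "imag": 1j * hologram.imag,
         "amplitude": np.abs(hologram),
         "phase": np.exp(1j * np.angle(hologram)),
         "full complex": hologram}
for name, h in parts.items():
    rec = np.abs(fresnel_propagate(h, wavelength, dx, z))
    rms = np.sqrt(np.mean((rec / rec.max() - target) ** 2))
    print(f"{name:12s} RMS error: {rms:.3f}")
```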

Relevance:

30.00%

Abstract:

Quality is strengthening its position in business as companies compete in international markets on both price and quality. This trend has given rise to a number of quality programmes that are widely used in implementing companies' total quality management (TQM). Quality management covers all of a company's operations and also imposes requirements on the development and improvement of the company's support functions. These include the subject of this study, the IT function. The aim of the thesis was to describe the current state of the IT process. The process description produced in the thesis is based on the theory of process management and on the quality award criteria used by the case company. Thematic interviews were used as the research method for determining the current state of the process. To determine the current state of the process and the requirements set for it, customers of the IT process were interviewed. A process analysis, the identification of the most important sub-processes, and the discovery of areas for improvement are the central results of this thesis. The thesis focused on finding weaknesses and improvement targets in the IT process as a basis for continuous improvement, rather than on radical process redesign. The thesis presents the principles of TQM, quality tools, and the terminology, principles, and systematic implementation of process management. The work also gives a picture of how TQM and process management fit together in a company's quality work.

Relevance:

30.00%

Abstract:

Free induction decay (FID) navigators have been found to qualitatively detect rigid-body head movements, yet it is unknown to what extent they can provide quantitative motion estimates. Here, we acquired FID navigators at different sampling rates and simultaneously measured head movements using a highly accurate optical motion tracking system. This strategy allowed us to estimate the accuracy and precision of FID navigators for quantifying rigid-body head movements. Five subjects were scanned with a 32-channel head coil array on a clinical 3T MR scanner during several resting and guided head movement periods. For each subject we trained a linear regression model based on FID navigator and optical motion tracking signals. FID-based motion model accuracy and precision were evaluated using cross-validation. FID-based prediction of rigid-body head motion achieved mean translational and rotational errors of 0.14 ± 0.21 mm and 0.08 ± 0.13°, respectively. Robust model training with sub-millimeter and sub-degree accuracy could be achieved using 100 data points with motion magnitudes of ±2 mm and ±1° for translation and rotation. The obtained linear models appeared to be subject-specific, as inter-subject application of a "universal" FID-based motion model resulted in poor prediction accuracy. The results show that substantial rigid-body motion information is encoded in FID navigator signal time courses. Although the applied method currently requires the simultaneous acquisition of FID signals and optical tracking data, the findings suggest that multi-channel FID navigators have the potential to complement existing tracking technologies for accurate rigid-body motion detection and correction in MRI.
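
A minimal sketch of the subject-specific calibration described, assuming the FID signals and tracking data are already aligned on a common time base; random arrays stand in for real acquisitions, and all shapes and names are illustrative:

```python
# Minimal sketch of the subject-specific FID motion model: a linear map from
# multi-channel FID navigator signals to the six rigid-body parameters
# measured by optical tracking, assessed with cross-validation.
# Random data stand in for real acquisitions; shapes are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

n_timepoints, n_channels = 600, 32
fid = np.random.randn(n_timepoints, n_channels)     # FID signal per coil element
motion = np.random.randn(n_timepoints, 6)           # 3 translations + 3 rotations

model = LinearRegression()
pred = cross_val_predict(model, fid, motion, cv=5)  # held-out predictions
rmse = np.sqrt(np.mean((pred - motion) ** 2, axis=0))
print("per-parameter RMSE (tx, ty, tz, rx, ry, rz):", rmse)
```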

Relevance:

30.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves.

In the second part of the thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled with an error model provides this approximate response in a two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply the methodology to a problem of saline intrusion in a coastal aquifer.
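
A schematic of the two-stage Metropolis-Hastings scheme described, in which a cheap proxy corrected by an error model screens each proposal before the expensive exact model is run; the likelihood functions below are placeholders standing in for the proxy-plus-error-model and exact flow responses:

```python
# Schematic two-stage Metropolis-Hastings: a cheap proxy (corrected by an
# error model) screens each proposal, and the expensive exact flow model is
# run only for proposals that pass the first stage. The model functions
# below are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)

def log_like_proxy(theta):   # stands in for proxy response + error model
    return -0.5 * np.sum((theta - 1.0) ** 2) / 0.3**2

def log_like_exact(theta):   # stands in for the expensive flow simulation
    return -0.5 * np.sum((theta - 1.0) ** 2) / 0.25**2

def two_stage_mcmc(n_iter=5000, step=0.3, dim=4):
    theta = np.zeros(dim)
    lp_proxy, lp_exact = log_like_proxy(theta), log_like_exact(theta)
    chain, exact_calls = [theta], 0
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(dim)
        lp_prop_proxy = log_like_proxy(prop)
        # Stage 1: accept/reject on the corrected proxy only.
        if np.log(rng.uniform()) < lp_prop_proxy - lp_proxy:
            exact_calls += 1
            lp_prop_exact = log_like_exact(prop)
            # Stage 2: correct with the exact model (ratio preserves
            # detailed balance for a symmetric proposal).
            if np.log(rng.uniform()) < (lp_prop_exact - lp_exact
                                        + lp_proxy - lp_prop_proxy):
                theta, lp_proxy, lp_exact = prop, lp_prop_proxy, lp_prop_exact
        chain.append(theta)
    return np.array(chain), exact_calls

chain, calls = two_stage_mcmc()
print(f"exact-model runs: {calls} out of {len(chain) - 1} proposals")
```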

Relevance:

30.00%

Abstract:

This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found an enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low-resolution electromagnetic tomography (sLORETA) we found greater activation of the insula for errors on the numerical task as compared to errors on the nonnumerical task, only in the HMA group. The results are interpreted according to the motivational significance theory of the ERN.

Relevance:

30.00%

Abstract:

The purpose of this dissertation is to analyse older consumers' adoption of information and communication technology innovations, assess the effect of aging-related characteristics, and evaluate older consumers' willingness to apply these technologies in health care services. This topic is considered important because the population in Finland (as in other welfare states) is aging, which offers an opportunity for marketers but, on the other hand, threatens society with increasing healthcare costs. Innovation adoption has been studied from several angles in both organizational and consumer research. In consumer behaviour research, several theories have been developed to predict consumer responses to innovation. The present dissertation carefully reviews previous research and takes a closer look at the theory of planned behaviour, the technology acceptance model and the diffusion of innovations perspective. It is suggested here that these theories can be combined and complemented to predict the adoption of ICT innovations among aging consumers, taking aging-related personal characteristics into account. In fact, very few studies in innovation research have concentrated on aging consumers, and thus there was a clear need for the present research. ICT in the health care context has been studied mainly from the organizational point of view; yet if the technology is applied to communication between the individual end-user and the service provider, the end-user cannot be shrugged off. The present dissertation uses empirical evidence from a survey targeted at 55-79-year-old people in one city in South Karelia. The empirical analysis of the research model was mainly based on structural equation modelling, which has been found very useful for estimating causal relationships. The tested models were designed to predict the adoption stage of personal computers and mobile phones, and the intention to adopt future health services that apply these devices for communication. The dissertation succeeded in modelling the adoption behaviour of mobile phones and PCs as well as adoption intentions for future services. Perceived health status and the three components behind it (depression, functional ability, and cognitive ability) were found to influence technology anxiety: better health leads to less anxiety. The effect of age was assessed as a control variable in order to compare it with the health characteristics; age influenced technology perceptions, but to a lesser extent than health. The analyses suggest that the major determinant of current technology adoption is perceived behavioural control, together with technology anxiety, which indirectly inhibits adoption through perceived control. For future service intentions, the key issue is perceived usefulness, which needs to be highlighted when new services are launched. Besides usefulness, the perceived reliability of the online service is important and affects intentions indirectly. In conclusion, older consumers' adoption behaviour is influenced by health status and age, but also by perceptions of anxiety and behavioural control. On the other hand, launching new types of health services for aging consumers is possible once the service is perceived as reliable and useful.
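
As a rough illustration of the kind of structural model estimated, the sketch below fits a simple path model with the semopy package on synthetic survey-like data; the variable names follow the constructs mentioned above, but the path structure and coefficients are illustrative assumptions, not the dissertation's fitted model:

```python
# Illustrative path model in the spirit of the dissertation's SEM analysis,
# fitted with the semopy package on synthetic data. The path structure and
# coefficients are assumptions for demonstration only.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 300
health = rng.normal(size=n)                 # perceived health status
age = rng.normal(size=n)
anxiety = -0.5 * health + 0.2 * age + rng.normal(scale=0.5, size=n)
control = -0.6 * anxiety + rng.normal(scale=0.5, size=n)
usefulness = rng.normal(size=n)
adoption = 0.7 * control + 0.5 * usefulness + rng.normal(scale=0.5, size=n)
df = pd.DataFrame(dict(health=health, age=age, anxiety=anxiety,
                       control=control, usefulness=usefulness,
                       adoption=adoption))

# Health and age feed technology anxiety, anxiety lowers perceived
# behavioural control, and control plus usefulness drive adoption.
desc = """
anxiety ~ health + age
control ~ anxiety + age
adoption ~ control + usefulness
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())   # path estimates, standard errors, p-values
```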

Relevance:

30.00%

Abstract:

In this thesis, X-ray tomography is discussed from the Bayesian statistical viewpoint. The unknown parameters are assumed to be random variables and, in contrast to traditional methods, the solution is obtained as a large sample from the distribution of all possible solutions. As an introduction to tomography, an inversion formula for the Radon transform on the plane is presented, and the widely used filtered backprojection algorithm is derived. The traditional regularization methods are presented in sufficient detail to ground the Bayesian approach. The measurements are photon counts at the detector pixels, so the assumption of a Poisson-distributed measurement error is justified. Often the error is assumed Gaussian, although the electronic noise caused by the measurement device can change the error structure; the assumption of a Gaussian measurement error is discussed. The thesis then discusses the use of different prior distributions in X-ray tomography. Especially in severely ill-posed problems, the choice of a suitable prior is the main part of the whole solution process. In the empirical part, the presented prior distributions are tested using simulated measurements, and the effects produced by the different prior distributions are shown. The use of a prior is shown to be obligatory in the case of a severely ill-posed problem.
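
The Poisson measurement model and the filtered backprojection baseline can be sketched as follows, using scikit-image's Radon tools; the phantom size, attenuation scale, and photon count are illustrative:

```python
# Sketch of the measurement model discussed above: photon counts follow a
# Poisson law, and classical filtered backprojection (via scikit-image) is
# applied to the log-transformed counts. All parameters are illustrative.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)          # small test phantom
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=theta)                  # line integrals

rng = np.random.default_rng(0)
I0 = 1e4                                              # incident photons per ray
scale = 4.0 / sinogram.max()                          # plausible max attenuation
counts = rng.poisson(I0 * np.exp(-scale * sinogram))  # Poisson measurement noise
measured = -np.log(np.maximum(counts, 1) / I0) / scale

fbp = iradon(measured, theta=theta)                   # filtered backprojection
print("reconstruction RMSE:", np.sqrt(np.mean((fbp - image) ** 2)))
```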

Relevance:

30.00%

Abstract:

The purpose of this thesis was to study the present state of Business Intelligence in the company unit, that is, how efficiently the unit uses the possibilities of modern information management systems. The aim was to determine how the operative information management of the unit's tender process could be improved by modern information technology applications, making tender processes faster and more efficient. At the beginning it was essential to become acquainted with the Business Intelligence literature. Based on Business Intelligence theory, it was relatively straightforward, though challenging, to work out how the tender business could be improved by Business Intelligence methods. The empirical phase of this study was executed with qualitative research methods, including thematic and informal interviews at the company. Problems and challenges of the tender process were clarified as part of the empirical phase, and a set of challenges was identified in the unit's information management. Based on the theory and the interviews, a list of improvements was compiled that the company could implement in the future when developing its operative processes.

Relevance:

30.00%

Abstract:

Nurses' acceptance and use of information technology in psychiatric hospitals. The use of information technology (IT) has not played a very significant role in psychiatric nursing, even though IT applications have been found to have radically changed healthcare services and the work processes of nursing staff in recent years. The aim of this study is to describe the acceptance and use of information technology among nursing staff working in psychiatric care, and to create a recommendation that can be used to support these in psychiatric hospitals. The study consists of five sub-studies employing both statistical and qualitative research methods. The data were collected among the nursing staff of nine acute psychiatric wards during 2003-2006. The Technology Acceptance Model (TAM) was used to structure the research process and to deepen the understanding of the results obtained. The study identified eight key factors that may support the acceptance and use of IT applications by nurses working in psychiatric hospitals when these factors are taken into account in the deployment of new applications. The factors fell into two groups: external factors (allocation of resources, collaboration, computer skills, IT training, application-specific practice, and the patient-nurse relationship), and ease of use and usability of the application (user guidance, ensuring usability). The TAM proved useful in interpreting the results. The recommendation developed covers the measures by which it is possible to support the commitment of both the organization's management and the nursing staff, and thereby ensure the acceptance and use of a new application in nursing. The recommendation can be applied in practice when new information systems are implemented in psychiatric hospitals.

Relevance:

30.00%

Abstract:

The objective of this study was to find out how project success can be measured when the output of a project is an intangible information product, what kind of framework can be used to evaluate project success, and how the project assessment can be done in practice. As a case example, the success of a business blueprint project was assessed from the product point of view. A framework for assessing business blueprint project success was constructed based on a literature review, together with separate frameworks for measuring information product quality and project costs. The theory of business blueprinting was found not to be firmly institutionalized, and it is briefly covered in the thesis. The possible net benefits from strategic business process harmonization were noted to be much more significant than the costs of the business blueprint project. The project was judged a sufficient success from the viewpoint of the created output.

Relevance:

30.00%

Abstract:

The directional consistency and skew-symmetry statistics have been proposed as global measurements of social reciprocity. Although both measures can be useful for quantifying social reciprocity, researchers need to know whether these estimators are biased in order to assess descriptive results properly. That is, if estimators are biased, researchers should compare actual values with expected values under the specified null hypothesis. Furthermore, standard errors are needed to enable suitable assessment of discrepancies between actual and expected values. This paper aims to derive some exact and approximate expressions in order to obtain bias and standard error values for both estimators for round-robin designs, although the results can also be extended to other reciprocal designs.
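
For concreteness, the sketch below computes the two statistics for a round-robin sociomatrix, using the usual definitions (DC = (H − L)/(H + L) over dyads, and skew-symmetry as the proportion of squared variation carried by the skew-symmetric part of the matrix); these definitions and the example matrix are assumptions for illustration, not the paper's derivations:

```python
# Sketch of the two global reciprocity statistics for a round-robin design:
# the directional consistency index DC = (H - L) / (H + L) and the
# skew-symmetry index (share of squared variation in the skew-symmetric
# part of the sociomatrix). Standard definitions are assumed; the
# interaction matrix is illustrative.
import numpy as np

def directional_consistency(x):
    """x[i, j]: interactions directed from i to j; diagonal ignored."""
    upper = np.triu_indices_from(x, k=1)
    hi = np.maximum(x[upper], x.T[upper]).sum()   # higher-frequency directions
    lo = np.minimum(x[upper], x.T[upper]).sum()   # lower-frequency directions
    return (hi - lo) / (hi + lo)

def skew_symmetry(x):
    x = x - np.diag(np.diag(x))                   # zero the diagonal
    k = (x - x.T) / 2.0                           # skew-symmetric part
    return (k ** 2).sum() / (x ** 2).sum()        # ranges from 0 to 0.5

x = np.array([[0, 5, 2, 0],
              [1, 0, 4, 3],
              [2, 1, 0, 6],
              [0, 2, 1, 0]], dtype=float)
print("DC:", directional_consistency(x))
print("skew-symmetry:", skew_symmetry(x))
```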

Relevance:

30.00%

Abstract:

Feedback-related negativity (FRN) is an ERP component that distinguishes positive from negative feedback. The FRN has been hypothesized to be the product of an error signal that may be used to adjust future behavior. In addition, associative learning models assume that the trial-to-trial learning of cue-outcome mappings involves the minimization of an error term. This study evaluated whether the FRN is a possible electrophysiological correlate of this error term in a predictive learning task in which human subjects were asked to learn different cue-outcome relationships. Specifically, we evaluated the sensitivity of the FRN to the course of learning when different stimuli interact or compete to become a predictor of certain outcomes. Importantly, some of these cues were blocked by more informative or predictive cues (i.e., the blocking effect). Interestingly, the present results show that both learning and blocking affect the amplitude of the FRN component. Furthermore, independent analyses of positive and negative feedback event-related signals showed that the learning effect was restricted to the ERP component elicited by positive feedback. The blocking test showed differences in FRN magnitude between a predictive and a blocked cue. Overall, the present results show that ERPs related to feedback processing correspond to the main predictions of associative learning models.
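
The error term assumed by such associative learning models is classically the Rescorla-Wagner prediction error; the toy simulation below shows how minimizing it reproduces the blocking effect mentioned above (parameters are illustrative):

```python
# Toy Rescorla-Wagner simulation of the blocking effect: learning is driven
# by the prediction error (lam minus the summed associative strengths), so a
# cue pre-trained to predict the outcome blocks learning about a cue added
# later. Parameters are illustrative.
alpha, lam = 0.2, 1.0          # learning rate, outcome magnitude
w = {"A": 0.0, "B": 0.0}       # associative strengths of cues A and B

for _ in range(50):            # phase 1: cue A alone -> outcome
    w["A"] += alpha * (lam - w["A"])
for _ in range(50):            # phase 2: compound AB -> outcome
    error = lam - (w["A"] + w["B"])
    w["A"] += alpha * error
    w["B"] += alpha * error

print(f"w(A) = {w['A']:.2f}, w(B) = {w['B']:.2f}  # B stays low: blocked")
```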