971 results for Inputs supraspinal
Abstract:
In fear conditioning, an animal learns to associate an unconditioned stimulus (US), such as a shock, with a conditioned stimulus (CS), such as a tone, so that presentation of the CS alone can trigger conditioned responses. Recent research on the lateral amygdala (LA) has shown that following cued fear conditioning, only a subset of highly excitable neurons is recruited into the memory trace, and their selective deletion after fear conditioning selectively erases the fearful memory. I hypothesize that the recruitment of highly excitable neurons depends on responsiveness to stimuli, intrinsic excitability and local connectivity. I further hypothesize that neurons recruited for an initial memory also participate in subsequent memories, that changes in neuronal excitability affect secondary fear learning, and that activation of these LA networks is by itself sufficient to induce a fearful memory. To address these hypotheses, I show that A) a rat can learn to associate two successive short-term fearful memories; and B) neuronal populations in the LA are competitively recruited into the memory traces depending on individual neuronal advantages as well as advantages granted by the local network. By performing two successive cued fear conditioning experiments, I found that rats were able to learn and extinguish the two successive short-term memories when tested 1 hour after learning for each memory. These rats were equipped with a system for stable extracellular recordings that I developed, which allowed me to monitor neuronal activity during fear learning. I recorded 233 individual putative pyramidal neurons that could modulate their firing rate in response to the conditioned tone (conditioned neurons) and/or non-conditioned tones (generalizing neurons). Of these recorded putative pyramidal neurons, 86 (37%) were conditioned to one or both tones. More precisely, one population of neurons encoded a shared memory while another group of neurons likely encoded the memories' new features. Notably, despite successful behavioral extinction, the firing rate of the conditioned neurons in response to the conditioned tone remained unchanged throughout memory testing. Furthermore, by analyzing the pre-conditioning characteristics of the conditioned neurons, I determined that neuronal recruitment could be predicted from three factors: 1) initial sensitivity to auditory inputs, with tone-sensitive neurons being more easily recruited than tone-insensitive neurons; 2) baseline excitability, with more excitable neurons being more likely to become conditioned; and 3) the number of afferent connections received from local neurons, with neurons destined to become conditioned receiving more connections than non-conditioned neurons. Finally, we found that the US in fear conditioning could be satisfactorily replaced by bilateral injections of bicuculline, an antagonist of γ-aminobutyric acid (GABA) receptors.
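The three-factor prediction described above is, in effect, a classification problem; the sketch below shows one minimal way such a recruitment prediction could be set up (assuming scikit-learn; all feature values and labels are simulated placeholders, not the recorded data).

# Sketch: predict whether a neuron becomes 'conditioned' from three
# pre-conditioning features. Simulated placeholder data, not the thesis data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 233
X = np.column_stack([
    rng.integers(0, 2, n),      # tone-sensitive before conditioning (0/1)
    rng.gamma(2.0, 1.5, n),     # baseline firing rate (Hz) as excitability proxy
    rng.poisson(3, n),          # number of putative local afferent connections
])
logits = -3 + 1.2 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))   # simulated recruitment labels

clf = LogisticRegression().fit(X, y)
print(clf.coef_)                # sign/size of each factor's contribution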
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks that follow the object structure. The matching of parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning due to the simplicity of the graphs involved, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking the whole hierarchy into account. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers, one for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
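A minimal sketch of the stacked-combination idea follows (assuming scikit-learn for a binary task; the kernel choices and second-level classifier are illustrative assumptions, not the thesis setup).

# Stacked kernel combination: one SVM per precomputed kernel; a second-level
# classifier learns from the base decision values instead of averaging them.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def stacked_kernel_combination(X_train, y_train, X_test):
    kernels = [
        lambda A, B: rbf_kernel(A, B, gamma=0.5),        # assumed kernel 1
        lambda A, B: polynomial_kernel(A, B, degree=3),  # assumed kernel 2
    ]
    train_resp, test_resp = [], []
    for k in kernels:
        svm = SVC(kernel="precomputed").fit(k(X_train, X_train), y_train)
        # decision values (not hard labels) become inputs to the next stage;
        # a full implementation would use cross-validated responses here to
        # avoid overfitting the second stage
        train_resp.append(svm.decision_function(k(X_train, X_train)))
        test_resp.append(svm.decision_function(k(X_test, X_train)))
    stacker = LogisticRegression().fit(np.column_stack(train_resp), y_train)
    return stacker.predict(np.column_stack(test_resp))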
Abstract:
This manual was commissioned as part of the assignment that Iberdrola gave to IRTA within the SOST-CO2 project, financed by the CENIT programme (Consorcios Estratégicos Nacionales en Investigación Técnica), which is part of the Ingenio 2010 Programme, a Spanish Government initiative to increase public and private R&D investment with the goal of reaching 2% of GDP by 2010. The SOST-CO2 project aims to address the complete life cycle of CO2, from its capture at emission sources through its transport, storage and large-scale valorisation. The intention is to link CO2 capture with its subsequent revalorisation, thereby seeking a sustainable alternative to the mere geological confinement of emissions. Within the SOST-CO2 project, Iberdrola considered the direct use in intensive horticulture of flue gases from its natural-gas-fired combined-cycle plants, given their enriched CO2 content, and commissioned IRTA to study it. The general objective was, besides substantial carbon fixation by the crops, a high level of environmental sustainability through the reduction of the energy inputs used in greenhouse climate control and in gas scrubbing. The work was carried out over four years (2008-2011) in various experimental settings. Of the different facets covered by this work, those presented here are the ones most directly related to the main result of the study, namely the suitability of the aforementioned flue gases for improving production and yield when applied in intensive horticulture.
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the incorporation of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM, automatic mapping of soil and water system pollution using ANN, natural hazards risk analysis (avalanches, landslides), and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
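As a rough illustration of the SVM-based spatial classification described (e.g. digital soil mapping), the sketch below assumes scikit-learn; the feature set and hyperparameters are illustrative, not the report's settings.

# Illustrative sketch: coordinates plus terrain-derived features -> class.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: x, y, elevation, slope, curvature (a 5-D geo-feature space)
X = rng.normal(size=(500, 5))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=500) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X, y)
# predict the class at a grid of unsampled locations
X_grid = rng.normal(size=(100, 5))
print(model.predict(X_grid)[:10])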
Abstract:
Design of warehouse-management software that records goods receipts, dispatches and the other operations typical of warehouses. The software must be scalable and durable over time, and must also support update, deletion and data-insertion operations as well as the fundamental query operations.
Abstract:
The spontaneous activity of the brain shows different features at different scales. On the one hand, neuroimaging studies show that long-range correlations are highly structured in spatiotemporal patterns, known as resting-state networks; on the other hand, neurophysiological reports show that short-range correlations between neighboring neurons are low, despite a large amount of shared presynaptic input. Different dynamical mechanisms of local decorrelation have been proposed, among which is feedback inhibition. Here, we investigated the effect of locally regulating the feedback inhibition on the global dynamics of a large-scale brain model in which the long-range connections are given by diffusion imaging data from human subjects. We used simulations and analytical methods to show that locally constraining the feedback inhibition to compensate for the excess of long-range excitatory connectivity, thereby preserving the asynchronous state, crucially changes the characteristics of the emergent resting and evoked activity. First, it significantly improves the model's prediction of the empirical human functional connectivity. Second, relaxing this constraint leads to unrealistic network evoked activity, with systematic coactivation of cortical areas that are components of the default-mode network, whereas regulation of feedback inhibition prevents this. Finally, information-theoretic analysis shows that regulation of the local feedback inhibition increases both the entropy and the Fisher information of the network evoked responses. Hence, it enhances the information capacity and the discrimination accuracy of the global network. In conclusion, the local excitation-inhibition ratio impacts the structure of the spontaneous activity and the information transmission at the large-scale brain level.
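As a rough illustration of the kind of local regulation described, the sketch below tunes each region's feedback-inhibition weight until its excitatory rate matches a common target despite heterogeneous long-range input (the threshold-linear rate model, target rate and update rule are illustrative assumptions, not the paper's dynamic equations).

import numpy as np

# Schematic feedback-inhibition regulation in a toy rate network: each
# region's inhibitory weight J[i] is slowly adjusted so that its excitatory
# rate converges to a common target despite heterogeneous long-range input.
def tune_feedback_inhibition(C, target=3.0, I_ext=3.0, dt=0.1, eta=0.05,
                             n_iter=20000):
    n = C.shape[0]
    r = np.zeros(n)                  # excitatory firing rates (Hz)
    J = np.ones(n)                   # local feedback-inhibition weights
    for _ in range(n_iter):
        drive = np.maximum(I_ext + C @ r - J * r, 0.0)
        r += dt * (drive - r)        # leaky threshold-linear rate dynamics
        J = np.maximum(J + eta * dt * (r - target), 0.0)  # homeostatic rule
    return J, r

rng = np.random.default_rng(1)
C = np.abs(rng.normal(0.02, 0.01, (66, 66)))   # stand-in for DTI connectivity
np.fill_diagonal(C, 0.0)
J, r = tune_feedback_inhibition(C)             # r ends near the target rate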
Abstract:
This guide presents Data Envelopment Analysis (DEA), a performance-evaluation method. It is intended for managers of public organisations who are not familiar with the notions of mathematical optimisation, in other words operational research. The use of mathematics is therefore kept to a minimum. The guide is strongly practice-oriented: it enables decision-makers to carry out their own efficiency analyses and to interpret the results easily. The DEA method is a tool for analysis and decision support in the following ways: by computing an efficiency score, it indicates whether an organisation has room for improvement; by setting target values, it indicates by how much inputs must be reduced and outputs increased for an organisation to become efficient; by identifying the type of returns to scale, it indicates whether an organisation should increase or, on the contrary, reduce its size in order to minimise its average production cost; and by identifying reference peers, it designates which organisations have best practices worth analysing.
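For readers who want to see the optimisation behind the efficiency score, the standard input-oriented CCR envelopment programme solved for each evaluated organisation 0 can be written as follows (textbook notation, not necessarily the guide's):

\min_{\theta,\lambda} \theta \quad \text{s.t.} \quad \sum_j \lambda_j x_{ij} \le \theta\, x_{i0} \;(\forall i), \qquad \sum_j \lambda_j y_{rj} \ge y_{r0} \;(\forall r), \qquad \lambda_j \ge 0,

where x_{ij} and y_{rj} are the inputs and outputs of organisation j; the optimal θ ≤ 1 is the efficiency score, and θ x_{i0} gives the input target values mentioned above.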
Abstract:
A differentiated reconstruction of palaeolimnological, palaeoenvironmental, and palaeoclimatic conditions is presented for the Middle Miocene long-term freshwater lake (14.3 to 13.5 Ma) of the Steinheim basin, on the basis of a combined C, O, and Sr isotope study of sympatric skeletal fossils of aquatic and terrestrial organisms from the lake sediments. The oxygen isotope composition of the lake water of the Steinheim basin (δ18O(H2O) = +2.0 ± 0.4‰ VSMOW, n = 6) was reconstructed from measurements of δ18O(PO4) of aquatic turtle bones. The drinking water calculated from the enamel of large mammals (proboscideans, rhinocerotids, equids, cervids, suids) has δ18O(H2O) values (-5.9 ± 1.7‰ VSMOW, n = 31) typical for Middle Miocene meteoric water of the area. This δ18O(H2O) value corresponds to a mean annual air temperature (MAT) of 18.8 ± 3.8 °C, calculated using a modern-day δ18O(H2O)-MAT relation. Hence, large mammals did not use the lake water as their principal drinking water. In contrast, small mammals, especially the then-abundant pika Prolagus oeningensis, drank from 18O-enriched water sources (δ18O(H2O) = +2.7 ± 2.3‰ VSMOW, n = 7), such as the lake water. Differences in Sr and O isotopic compositions between large and small mammal teeth indicate different home ranges and drinking behaviour, and support migration of some large mammals between the Swabian Alb plateau and the nearby Molasse basin, while small mammals ingested their food and water locally. Changes in lake level, water chemistry, and temperature were inferred using the isotopic compositions of ostracod and gastropod shells from a composite lake sediment profile. Calcitic ostracod valves (Ilyocypris binocularis; δ18O = +1.7 ± 1.2‰ VPDB, δ13C = -0.5 ± 0.9‰ VPDB, n = 68) and aragonitic gastropod shells (Gyraulus spp.; δ18O = +2.0 ± 1.3‰ VPDB, δ13C = -1.1 ± 1.3‰ VPDB, n = 89) have δ18O and δ13C values similar to or even higher than those of marine carbonates. δ13C values of the biogenic carbonates parallel lake level fluctuations, while δ18O values scatter around +2 ± 2‰ and reflect the short-term variability of meteoric water inflow vs. longer-term evaporation. 87Sr/86Sr ratios of aragonitic Gyraulus spp. gastropod shells parallel the lake level fluctuations, reflecting variable inputs of groundwater and surface waters. Using a water δ18O(H2O) value of +2.0‰ VSMOW, water temperatures calculated from skeletal tissue δ18O values are 16.7 ± 5.0 °C for ostracods, 20.6 ± 5.6 °C for gastropods, 21.8 ± 1.4 °C for otoliths, and 17.0 ± 2.7 °C for fish teeth. The calculated MAT (~19 °C), lake water temperatures (~17 to 22 °C) and the 18O-enriched water compositions are indicative of warm-temperate climatic conditions, possibly with high humidity during this period. Vegetation in the area surrounding the basin was largely of the C3 type, as indicated by the carbon isotopic compositions of tooth enamel from large mammals (δ13C = -11.1 ± 1.1‰ VPDB, n = 40).
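Water temperatures of this kind are conventionally obtained from the carbonate-water oxygen-isotope fractionation; a classic calibration of this form (quoted here as general background, since the paper's exact calibration may differ) is

T(\,^{\circ}\mathrm{C}) = 16.9 - 4.2\,(\delta^{18}O_{carbonate} - \delta^{18}O_{water}) + 0.13\,(\delta^{18}O_{carbonate} - \delta^{18}O_{water})^{2},

with δ18O of the carbonate measured on the VPDB scale and δ18O of the water on the VSMOW scale, which is why a lake water value (+2.0‰ VSMOW) must be fixed before shell δ18O values can be converted into temperatures.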
Abstract:
BACKGROUND: High blood pressure, blood glucose, serum cholesterol, and BMI are risk factors for cardiovascular diseases and some of these factors also increase the risk of chronic kidney disease and diabetes. We estimated mortality from cardiovascular diseases, chronic kidney disease, and diabetes that was attributable to these four cardiometabolic risk factors for all countries and regions from 1980 to 2010. METHODS: We used data for exposure to risk factors by country, age group, and sex from pooled analyses of population-based health surveys. We obtained relative risks for the effects of risk factors on cause-specific mortality from meta-analyses of large prospective studies. We calculated the population attributable fractions for each risk factor alone, and for the combination of all risk factors, accounting for multicausality and for mediation of the effects of BMI by the other three risks. We calculated attributable deaths by multiplying the cause-specific population attributable fractions by the number of disease-specific deaths. We obtained cause-specific mortality from the Global Burden of Diseases, Injuries, and Risk Factors 2010 Study. We propagated the uncertainties of all the inputs to the final estimates. FINDINGS: In 2010, high blood pressure was the leading risk factor for deaths due to cardiovascular diseases, chronic kidney disease, and diabetes in every region, causing more than 40% of worldwide deaths from these diseases; high BMI and glucose were each responsible for about 15% of deaths, and high cholesterol for more than 10%. After accounting for multicausality, 63% (10·8 million deaths, 95% CI 10·1-11·5) of deaths from these diseases in 2010 were attributable to the combined effect of these four metabolic risk factors, compared with 67% (7·1 million deaths, 6·6-7·6) in 1980. The mortality burden of high BMI and glucose nearly doubled from 1980 to 2010. At the country level, age-standardised death rates from these diseases attributable to the combined effects of these four risk factors surpassed 925 deaths per 100 000 for men in Belarus, Kazakhstan, and Mongolia, but were less than 130 deaths per 100 000 for women and less than 200 for men in some high-income countries including Australia, Canada, France, Japan, the Netherlands, Singapore, South Korea, and Spain. INTERPRETATION: The salient features of the cardiometabolic disease and risk factor epidemic at the beginning of the 21st century are high blood pressure and an increasing effect of obesity and diabetes. The mortality burden of cardiometabolic risk factors has shifted from high-income to low-income and middle-income countries. Lowering cardiometabolic risks through dietary, behavioural, and pharmacological interventions should be a part of the global response to non-communicable diseases. FUNDING: UK Medical Research Council, US National Institutes of Health.
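The central quantity in such estimates is the population attributable fraction (PAF); in its standard comparative-risk form (a generic formulation, not necessarily the paper's exact implementation):

\mathrm{PAF} = \frac{\int RR(x)\,P(x)\,dx - \int RR(x)\,P^{*}(x)\,dx}{\int RR(x)\,P(x)\,dx},

where P(x) is the observed exposure distribution, P*(x) the counterfactual distribution at the theoretical minimum, and RR(x) the relative risk at exposure level x; attributable deaths are then the PAF multiplied by the cause-specific death count.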
Abstract:
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and on the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of debris flow magnitude was not attempted, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used together with land use, geology and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates information about debris flow spreading-direction probabilities, showing the areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from the models applied and the analysis scale, which neglect local controlling factors of debris flow hazard. The presented approach to debris flow hazard analysis, associating automatic detection of the source areas with a simple assessment of debris flow spreading, provided results suitable for consequent hazard and risk studies. However, more testing is needed to validate the parameters and results and to transfer them to other study areas.
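Spreading models of this family typically combine a multiple-flow-direction rule with energy-based runout limits; a minimal sketch of a Holmgren-type multiple-flow-direction weighting follows (the exponent and neighbourhood handling are assumptions, not Flow-R's exact implementation).

# Holmgren-type multiple flow direction: flow leaving a cell is distributed
# among lower neighbours proportionally to tan(slope)^x; x >= 1 concentrates
# the flow along the steepest directions.
import numpy as np

def mfd_weights(dem, i, j, cellsize=10.0, x=4.0):
    weights = {}
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            ni, nj = i + di, j + dj
            if not (0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1]):
                continue
            dist = cellsize * np.hypot(di, dj)
            tanb = (dem[i, j] - dem[ni, nj]) / dist
            if tanb > 0:                 # only downslope neighbours receive flow
                weights[(ni, nj)] = tanb ** x
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()} if total else {}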
Abstract:
Introduction: Accurate registration of the relative timing of sensory events on a sub-second time scale is crucial for both sensory-motor and cognitive functions (Mauk and Buonomano, 2004; Habib, 2000). Support for this assumption comes notably from evidence that temporal processing impairments are implicated in a range of neurological and psychiatric conditions (e.g. Buhusi & Meck, 2005). For instance, deficits in fast auditory temporal integration have regularly been put forward as producing the phonologic discrimination impairments underlying the speech comprehension deficits that characterize, e.g., dyslexia (Habib, 2000). At least two aspects of the brain mechanisms of temporal order judgment (TOJ) remain unknown. First, it is unknown when during the course of stimulus processing a temporal "stamp" is established to guide TOJ perception. Second, the extent of interplay between the cerebral hemispheres in engendering accurate TOJ performance is unresolved. Methods: We investigated the spatiotemporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. Results: AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding of pairs of auditory spatial stimuli is critical for perceiving their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on the functional de-coupling between homotopic PSR areas. Conclusions: These results support a model of temporal order processing wherein behaviorally relevant temporal information, i.e. a temporal "stamp", is extracted within the early stages of cortical processing within left PSR but is critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms.
Abstract:
This paper discusses predictive motion control of a MiRoSoT robot. The dynamic model of the robot is deduced by taking into account the whole process: robot, vision, control and transmission systems. Based on the obtained dynamic model, an integrated predictive control algorithm is proposed to position the robot precisely while avoiding either stationary or moving obstacles. This objective is achieved automatically by introducing distance constraints into the open-loop optimization of control inputs. Simulation results demonstrate the feasibility of this control strategy for the deduced dynamic model.
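A minimal receding-horizon sketch of this idea follows (assuming a unicycle-style model and scipy; the dynamics, horizon, weights and obstacle constraint are illustrative assumptions, not the paper's identified model).

# Receding-horizon (predictive) control with a distance constraint to an
# obstacle: optimize a short open-loop input sequence, apply the first input.
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 10                      # time step and horizon length
goal = np.array([1.0, 1.0])
obstacle, r_min = np.array([0.5, 0.5]), 0.15

def rollout(state, u):
    traj = []
    x, y, th = state
    for v, w in u.reshape(H, 2):     # unicycle kinematics (assumed model)
        x += DT * v * np.cos(th); y += DT * v * np.sin(th); th += DT * w
        traj.append((x, y))
    return np.array(traj)

def cost(u, state):                  # track the goal, penalize effort
    return np.sum((rollout(state, u) - goal) ** 2) + 1e-3 * np.sum(u ** 2)

def obstacle_margin(u, state):       # >= 0 keeps every predicted point clear
    return np.linalg.norm(rollout(state, u) - obstacle, axis=1) - r_min

state = np.array([0.0, 0.0, 0.0])
res = minimize(cost, np.zeros(2 * H), args=(state,),
               constraints=[{"type": "ineq", "fun": obstacle_margin,
                             "args": (state,)}])
v0, w0 = res.x[:2]                   # apply only the first input, then re-plan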
Abstract:
RotaTeq® (Merck & Company, Inc, Whitehouse Station, NJ, USA) is an oral pentavalent rotavirus vaccine (RV5) that has shown high and consistent efficacy in preventing rotavirus gastroenteritis (RGE) in randomised clinical trials previously conducted in industrialised countries with high medical care resources. To date, efficacy and effectiveness data for RV5 are available for some Latin American countries, but not for Brazil. In this analysis, we projected the effectiveness of RV5 in terms of the percentage reduction in RGE-related hospitalisations among children less than five years of age in four regions of Brazil, using a previously validated mathematical model. The model inputs included hospital-based rotavirus surveillance data from Goiânia, Porto Alegre, Salvador and São Paulo from 2005-2006, which provided the proportions of rotavirus attributable to serotypes G1, G2, G3, G4 and G9, and published rotavirus serotype-specific efficacy from the Rotavirus Efficacy and Safety Trial. The model projected an overall reduction of 93% in RGE-related hospitalisations, with an estimated annual reduction of between 42,991 and 77,383 RGE-related hospitalisations in the four combined regions of Brazil. These results suggest that RV5 could substantially prevent RGE-related hospitalisations in Brazil.
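The core of such a projection is a serotype-proportion-weighted sum of serotype-specific efficacies; a minimal sketch follows (all numbers are placeholders, not the surveillance or trial values, and real models also account for vaccine coverage).

# Sketch: overall projected reduction = sum over serotypes of
# (proportion of RGE due to serotype g) x (efficacy against serotype g).
proportions = {"G1": 0.40, "G2": 0.30, "G3": 0.05, "G4": 0.05, "G9": 0.20}
efficacy    = {"G1": 0.95, "G2": 0.88, "G3": 0.93, "G4": 0.89, "G9": 0.95}

reduction = sum(p * efficacy[g] for g, p in proportions.items())
print(f"projected reduction in RGE hospitalisations: {reduction:.0%}")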
Abstract:
The chemical and isotopic compositions (δD(H2O), δ18O(H2O), δ18O(CO2), δ13C(CO2), δ34S, and He/N2 and He/Ar ratios) of fumarolic gases from Nisyros, Greece, indicate that both arc-type magmatic water and local seawater feed the hydrothermal system. The isotopic composition of the deep fluid is estimated to be +4.9 ± 0.5‰ for δ18O and -11 ± 5‰ for δD, corresponding to a magmatic water fraction of 0.7. Interpretation of the stable water isotopes was based on liquid-vapor separation conditions obtained through gas geothermometry. The H2-Ar, H2-N2, and H2-H2O geothermometers suggest reservoir temperatures of 345 ± 15 °C, in agreement with temperatures measured in deep geothermal wells, whereas a vapor/liquid separation temperature of 260 ± 30 °C is indicated by gas equilibria in the H2O-H2-CO2-CO-CH4 system. The largest magmatic inputs seem to occur below the Stephanos-Polybotes Micros crater, whereas the marginal fumarolic areas of the Phlegeton-Polybotes Megalos craters receive a smaller contribution of magmatic gases.
Abstract:
The goal of this paper is twofold: first, we aim to assess the role played by inventors' cross-regional mobility and networks of collaboration in fostering knowledge diffusion across regions and subsequent innovation. Second, we intend to evaluate the feasibility of using mobility and network information to build cross-regional interaction matrices to be used within the spatial econometrics toolbox. To do so, we start from a knowledge production function in which regional innovation intensity is a function not only of the region's own innovation inputs but also of external accessible R&D gained through interregional interactions. Unlike much of the previous literature, cross-section gravity models of mobility and networks are estimated, and their fitted values are used to build our 'spatial' weights matrices, which characterize the intensity of knowledge interactions across a panel of 269 regions covering most European countries over 6 years.
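Schematically, the specification described amounts to something like the following (notation illustrative, not the paper's exact model):

\log Y_{it} = \alpha + \beta_1 \log RD_{it} + \beta_2 \sum_{j \neq i} \hat{w}_{ij} \log RD_{jt} + \mu_i + \tau_t + \varepsilon_{it},

where Y_{it} is the innovation output of region i in year t, RD_{it} its own R&D inputs, and \hat{w}_{ij} the interaction weights obtained from the fitted values of the gravity models of inventor mobility and collaboration networks.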