615 results for Learning space design


Relevance:

30.00%

Publisher:

Abstract:

When individuals learn by trial and error, they perform randomly chosen actions and then reinforce those actions that led to a high payoff. However, individuals do not always have to physically perform an action in order to evaluate its consequences. Rather, they may be able to mentally simulate actions and their consequences without actually performing them. Such fictitious learners can select actions with high payoffs without long chains of trial-and-error learning. Here, we analyze the evolution of an n-dimensional cultural trait (or artifact) by learning, in a payoff landscape with a single optimum. We derive the stochastic learning dynamics of the distance to the optimum in trait space when the choice between alternative artifacts follows the standard logit choice rule. We show that for both trial-and-error and fictitious learners, the learning dynamics stabilize at an approximate distance of √n/(2λₑ) from the optimum, where λₑ is an effective learning performance parameter that depends on the learning rule under scrutiny. Individual learners are thus unlikely to reach the optimum when traits are complex (n large), and so face a barrier to further improvement of the artifact. We show, however, that this barrier can be significantly reduced in a large population of learners performing payoff-biased social learning, in which case λₑ becomes proportional to population size. Overall, our results illustrate the effects of errors in learning, levels of cognition, and population size on the evolution of complex cultural traits.
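To make the logit choice rule concrete, a minimal Python sketch of a trial-and-error learner on a single-optimum landscape follows. The details are assumptions for illustration (a quadratic payoff peaking at the origin, Gaussian trait perturbations, and the names `payoff`, `logit_choice_prob` and `trial_and_error_step` are ours), not the authors' model; their analysis predicts the distance plateauing near √n/(2λₑ).

```python
import numpy as np

rng = np.random.default_rng(0)

def payoff(v):
    """Assumed single-optimum payoff landscape: quadratic peak at the origin."""
    return -np.sum(v ** 2)

def logit_choice_prob(pay_new, pay_old, lam):
    """Logit choice rule: probability of adopting the new artifact."""
    return 1.0 / (1.0 + np.exp(-lam * (pay_new - pay_old)))

def trial_and_error_step(x, lam, sigma=0.05):
    """Propose a random variant of the current trait vector and adopt it
    with logit probability; otherwise keep the current artifact."""
    candidate = x + rng.normal(0.0, sigma, size=x.shape)
    if rng.random() < logit_choice_prob(payoff(candidate), payoff(x), lam):
        return candidate
    return x

# Toy run: for large n the distance to the optimum stabilizes away from zero.
n, lam = 50, 10.0
x = rng.normal(0.0, 1.0, size=n)
for _ in range(20_000):
    x = trial_and_error_step(x, lam)
print("final distance to optimum:", np.linalg.norm(x))
```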

Relevance:

30.00%

Publisher:

Abstract:

In this article we propose an approach to the idea of public space as an articulator of the set of events that intervene in the life of cities. We understand this phenomenon as a polyhedral, multidimensional network whose study involves the analysis of several problems: the identification of the boundaries between public space and the public sphere; the configuration of the built public space; the theoretical approach to the phenomenon from a contemporary standpoint; the social dimension of public space; and, finally, the perspective of city management. All these dimensions express a critical view of the object and highlight levels of interdisciplinary and multi-scale work that are fundamental for understanding and intervening in the public space of the contemporary city.

Relevance:

30.00%

Publisher:

Abstract:

Two spatial tasks were designed to test specific properties of spatial representation in rats. In the first task, rats were trained to locate an escape hole at a fixed position in a visually homogeneous arena. This arena was connected with a periphery offering a full view of the room environment. Rats were therefore dependent on their memory trace of the previous position in the periphery to discriminate a position within the central region. Under these experimental conditions, the test animals showed significant discrimination of the training position without a specific local view. In the second task, rats were trained in a radial maze consisting of tunnels that were transparent at their distal ends only. Because the central part of the maze was non-transparent, rats had to plan and execute appropriate trajectories without specific visual feedback from the environment. This situation was intended to encourage reliance on prospective memory of the non-visited arms in selecting the next move. Our results show that acquisition performance was only slightly lower than in a completely transparent maze and considerably higher than in a translucent maze or in darkness. These two series of experiments indicate (1) that rats can learn the relative position of different places with no common visual panorama, and (2) that they are able to plan and execute a sequence of visits to several places without direct visual feedback about their relative position.

Relevance:

30.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling and prediction to automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the multi-layer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns described at least by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties.

An important part of the thesis deals with a currently hot topic, the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with a user-friendly and easy-to-use interface.
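As an illustration of the automatic-mapping idea, here is a minimal sketch of a GRNN predictor. A GRNN is essentially Nadaraya-Watson kernel regression: each prediction is a Gaussian-weighted average of the training targets with a single smoothing parameter. This is the generic textbook formulation on assumed toy data, not the thesis's Machine Learning Office implementation; in practice the bandwidth sigma would be tuned, for example by cross-validation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN / Nadaraya-Watson kernel regression: Gaussian-weighted
    average of training targets, one smoothing parameter sigma."""
    # Squared Euclidean distances between query and training points.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))       # kernel weights
    return (w @ y_train) / w.sum(axis=1)       # weighted mean per query

# Toy spatial interpolation: 200 scattered samples of a smooth field.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))           # (x, y) coordinates
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])  # observed values
grid = rng.uniform(0, 1, size=(5, 2))          # query locations
print(grnn_predict(X, y, grid, sigma=0.1))
```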

Relevance:

30.00%

Publisher:

Abstract:

The pharmaceutical industry has been facing several challenges in recent years, and optimization of the drug discovery pipeline is believed to be the only viable solution. High-throughput techniques participate actively in this optimization, especially when complemented by computational approaches that aim to rationalize the enormous amount of information they can produce. In silico techniques, such as virtual screening or rational drug design, are now routinely used to guide drug discovery. Both rely heavily on predicting the molecular interaction (docking) between drug-like molecules and a therapeutically relevant target. Several software packages are available to this end, but despite the very promising picture drawn in most benchmarks, they still have several hidden weaknesses. As pointed out in several recent reviews, the docking problem is far from solved, and there is now a need for methods able to identify binding modes with high accuracy, which is essential to reliably compute the binding free energy of the ligand. This quantity is directly linked to its affinity and can be related to its biological activity. Accurate docking algorithms are thus critical for both the discovery and the rational optimization of new drugs.

In this thesis, a new docking program aiming at this goal is presented: EADock. It uses a hybrid evolutionary algorithm with two fitness functions, combined with sophisticated management of diversity. EADock is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and contrary to other benchmarks, the algorithm was fed with optimized ligand positions up to 10 Å root-mean-square deviation (RMSD) from the crystal structure. This validation illustrates the efficiency of our sampling heuristic: correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when the five best-ranked clusters are considered, and to 92% when all clusters present in the last generation are taken into account. Most failures in this benchmark could be explained by the presence of crystal contacts in the experimental structure.

EADock has been used to understand molecular interactions involved in the regulation of the Na,K-ATPase and in the activation of the nuclear hormone peroxisome proliferator-activated receptor α (PPARα). It also helped to explain the action of common pollutants (phthalates) on PPARγ, and the impact of biotransformations of the anticancer drug Imatinib (Gleevec®) on its binding mode to the Bcr-Abl tyrosine kinase. Finally, a fragment-based rational drug design approach using EADock was developed and led to the successful design of new peptidic ligands for the α5β1 integrin and for human PPARα. In both cases, the designed peptides showed activities comparable to those of well-established ligands such as the anticancer drug Cilengitide and Wy14,643, respectively.
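The benchmark above scores a predicted binding mode by its RMSD to the crystal-structure pose, with 2 Å as the success threshold. Below is a minimal sketch of that criterion, assuming both poses share the same atom ordering and no superposition fitting is applied; EADock's actual energy evaluation and clustering go through CHARMM and are not reproduced here.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two conformations given as
    (n_atoms, 3) coordinate arrays with matching atom order (no fitting)."""
    return np.sqrt(np.mean(np.sum((coords_a - coords_b) ** 2, axis=1)))

def is_correct_pose(predicted, crystal, threshold=2.0):
    """Success criterion used in the benchmark: a binding mode counts as
    correct if its RMSD to the crystal pose is below 2 Angstroms."""
    return rmsd(predicted, crystal) < threshold

# Toy check: perturb a synthetic 30-atom pose by ~0.5 A noise per coordinate.
rng = np.random.default_rng(2)
crystal = rng.normal(size=(30, 3))
predicted = crystal + rng.normal(0.0, 0.5, size=(30, 3))
print(rmsd(predicted, crystal), is_correct_pose(predicted, crystal))
```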

Relevance:

30.00%

Publisher:

Abstract:

This communication is part of a larger teaching innovation project financed by the University of Barcelona, whose objective is to develop and evaluate the UB's transversal competences of learning ability and responsibility. The competence is divided into several sub-competencies, the capacity for analysis and synthesis being the one worked on most intensely in the first year. The work presented here builds on the results obtained in phases 1 and 2, previously implemented in other first-year subjects (Mathematics and History) of the Business Administration degree. Previous experiences in those subjects revealed deficiencies in the students' acquisition of learning skills. The work in Mathematics helped students become aware of this deficit, while the work in History insisted on developing reading schemes, with practical exercises intended to deepen the development of this competence.

The third phase presented here is developed in the framework of the second year of the degree, in the World Economy subject. The objective of this phase is the development and evaluation of the same transversal competence as in the previous phases, through a practical exercise that includes both quantitative analysis and critical reflection. Specifically, the exercise focuses on the study of the dynamic relationship between economic growth and the dynamics of the distribution of wealth. The design of the activity, as well as the selection of materials for it, has been directed at addressing the gaps in the capacity for analysis and synthesis detected in the first-year subjects during the previous phases of the project.

The practical case is considered an adequate methodology to improve students' acquisition of the competence, and a way to evaluate that acquisition is also proposed. The exercise is assessed with a rubric developed within the framework of the project objectives. Thus, at the end of phase 3, we can analyze the process the students have followed, detect where they have had major difficulties, and identify those aspects of teaching that can help improve their acquisition of skills. The interest of this phase lies in the possibility of assessing whether tracing learning through competences, organized in a collaborative way, is a good tool to develop the acquisition of these skills and facilitate their evaluation.

Relevance:

30.00%

Publisher:

Abstract:

The term Space Manifold Dynamics (SMD) has been proposed to encompass the various applications of dynamical systems methods to spacecraft mission analysis and design, ranging from the exploitation of libration orbits around the collinear Lagrangian points to the design of optimal station-keeping and eclipse-avoidance manoeuvres, or the determination of low-energy lunar and interplanetary transfers.
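For readers unfamiliar with the collinear Lagrangian points mentioned above, here is a minimal sketch that locates L1, L2 and L3 in the circular restricted three-body problem by root-finding on the effective-potential gradient. This is textbook CRTBP material in normalized units, not a method from the paper itself.

```python
import numpy as np
from scipy.optimize import brentq

def collinear_points(mu):
    """x-coordinates of the collinear libration points L1, L2, L3 in the
    CRTBP rotating frame: primaries of mass 1-mu at (-mu, 0) and mu at
    (1-mu, 0), distances and angular velocity normalized to 1."""
    def fx(x):  # x-component of the effective-potential gradient
        r1, r2 = x + mu, x - (1 - mu)
        return x - (1 - mu) * r1 / abs(r1) ** 3 - mu * r2 / abs(r2) ** 3
    eps = 1e-9
    L1 = brentq(fx, -mu + eps, 1 - mu - eps)  # between the primaries
    L2 = brentq(fx, 1 - mu + eps, 2.0)        # beyond the smaller primary
    L3 = brentq(fx, -2.0, -mu - eps)          # opposite the smaller primary
    return L1, L2, L3

# Earth-Moon system, mu ~ 0.01215: yields roughly 0.837, 1.156, -1.005.
print(collinear_points(0.01215))
```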

Relevance:

30.00%

Publisher:

Abstract:

The study reports a set of forty proteinogenic histidine-containing dipeptides as potential carbonyl quenchers. The peptides were chosen to cover the accessible chemical space as exhaustively as possible, and their quenching activities toward 4-hydroxy-2-nonenal (HNE) and pyridoxal were evaluated by HPLC analyses. The peptides were capped at the C-terminus as methyl esters or amides to favor their resistance to proteolysis, and diastereoisomeric pairs were considered to reveal the influence of configuration on quenching. On average, the examined dipeptides are less active than the parent compound carnosine (βAla + His), emphasizing the unfavorable effect of shortening the βAla residue, as confirmed by the control dipeptide Gly-His. Nevertheless, some peptides show promising activities toward HNE combined with a remarkable selectivity. The results emphasize the beneficial role of aromatic and positively charged residues, while negatively charged and H-bonding side chains have a detrimental effect on quenching. As a trend, ester derivatives are slightly more active than amides, and heterochiral peptides are more active than their homochiral diastereoisomers. Overall, the results reveal that quenching activity strongly depends on conformational effects and vicinal residues (as evidenced by the reported QSAR analysis), offering insightful clues for the design of improved carbonyl quenchers and for rationalizing the specific reactivity of histidine residues within proteins.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present the experimental results and evaluation of the SmartBox stimulation device in a P2P e-learning system based on JXTA-Overlay. We also show the design and implementation of the SmartBox environment, which is used to stimulate learners' motivation and increase learning efficiency. The SmartBox is integrated with our P2P system as a useful tool for monitoring and controlling learners' activity. Experimental results show that the SmartBox is an effective way to increase a learner's concentration. We also investigated the relation between a learner's body movement, concentration, and amount of study. From the experimental results, we conclude that the SmartBox is an effective way to stimulate learners to continue studying while maintaining concentration.

Relevance:

30.00%

Publisher:

Abstract:

Cooperative transmission can be seen as a "virtual" MIMO system, where the multiple transmit antennas are in fact implemented in a distributed way by the antennas at both the source and the relay terminal. Depending on the system design, diversity and/or multiplexing gains are achievable. This design involves the definition of the type of retransmission (incremental redundancy, repetition coding), the design of the distributed space-time codes, the error-correcting scheme, the operation of the relay (decode-and-forward or amplify-and-forward), and the number of antennas at each terminal. The proposed schemes are evaluated under different conditions in combination with forward error-correcting (FEC) codes, for both linear and near-optimum (sphere decoder) receivers, with a view to possible implementation in downlink high-speed packet services of cellular networks. Results show the benefits of coded cooperation over direct transmission in terms of increased throughput. Multiplexing gains are observed even if the mobile station features a single antenna, provided that cell-wide reuse of the relay radio resource is possible.
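To make the distributed space-time coding idea concrete, here is a minimal sketch of the classic Alamouti block code, with the two "antennas" read as a source and a decode-and-forward relay each transmitting one column. It illustrates the general technique, not the specific codes evaluated in the paper; noise is omitted and the channel is assumed known and constant over the two slots.

```python
import numpy as np

rng = np.random.default_rng(3)

def alamouti_encode(s1, s2):
    """Alamouti space-time block code over two symbol periods.
    Rows are time slots; columns are the two distributed transmitters."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(y, h1, h2):
    """Linear ML combining at a single receive antenna; channel gains
    h1, h2 are assumed constant over both slots. Returns estimates
    scaled by (|h1|^2 + |h2|^2)."""
    y1, y2 = y
    s1 = np.conj(h1) * y1 + h2 * np.conj(y2)
    s2 = np.conj(h2) * y1 - h1 * np.conj(y2)
    return s1, s2

# Toy run with QPSK symbols and Rayleigh fading, no noise.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
X = alamouti_encode(s1, s2)
y = X @ np.array([h1, h2])                     # received over two slots
est = alamouti_decode(y, h1, h2)
print(np.array(est) / (abs(h1) ** 2 + abs(h2) ** 2))  # recovers s1, s2
```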

Relevance:

30.00%

Publisher:

Abstract:

Local governments need minimum common criteria to manage the social dynamics of diversity. This Handbook defends the strategy of interculturality as a public policy approach, based on interpreting interculturality as a positive resource, as a public cultural and collective good. It is an approach that promotes equitable interaction as a way to generate a cohesive common public space. The Handbook provides the reader with conceptual and practical instruments to help (and inspire) those territories that would like to integrate interculturality as an urban project. It aims to serve as a ground for discussion and joint work in local administrations and other levels of government, for organizations and institutions, as well as for cultural, political and citizens' collectives. The results are presented as an action of the Red de Ciudades Interculturales (RECI), within the Intercultural Cities framework of the Council of Europe, with the collaboration of Obra Social "La Caixa".