957 results for self-organizing maps
Abstract:
In the 16th century, merchants and bankers gained social influence and political relevance due to their capacity to ‘faire travailler l’argent des autres’ (Benassar 1972:50). For the success of their activity, they built evolving networks of cooperative partners. These networks were much more than the sum of their partners. In the case study of the Castilian merchant Simon Ruiz, the network functioned in a unique way, independent of any formal institutional control. Its functioning varied in how different partners were associated and in the particular characteristics and contents of these social ties. As a self-organized network, beyond the reach of the formal institutions of trade regulation and of Crown control, the Simon Ruiz network was deeply embedded in the economic and financial performance of the Hispanic Empires in two different ways. The first was purely commercial. The monopolistic regime applied by the two crowns to the trade in certain colonial goods was insufficient to cover the costs of imperial maintenance, so private agents sought to lease contracts for the exploitation of trade, paying an annual sum to the crown, as in the Portuguese trade. Some of these agents also moved along Simon Ruiz’s network. Others were involved with the imperial crowns in a second way: finance. Maintaining empires demanded extensive human, technical and also financial means, and kings were most of the time forced to resort to these merchants, as we will demonstrate. What were the implications of these collaborative relations for both parties?
The main goal of this paper is to understand the evolution of informal norms within Simon Ruiz’s network and how they influenced the cooperative behavior of its agents, analyzing in particular the mechanisms of sanctioning, control, punishment and reward, as well as their consequences along different dimensions: future interactions, social repercussions, and the agents’ economic health and activity. The research is based on the bills of exchange and commercial correspondence in the private archive of Simon Ruiz, held in the Provincial Archive of Valladolid, Spain.
Abstract:
This dissertation consists of three papers. The first paper, "Managing the Workload: an Experiment on Individual Decision Making and Performance", experimentally investigates how decision-making in workload management affects individual performance. I designed a laboratory experiment in order to exogenously manipulate the schedule of work faced by each subject and to identify its impact on final performance. Through the mouse click-tracking technique, I also collected behavioral measures of organizational skills. I found that a non-negligible share of individuals performs better under externally imposed schedules than in the unconstrained case. However, such constraints are detrimental for those who are good at self-organizing. The second chapter, "On the allocation of effort with multiple tasks and piecewise monotonic hazard function", tests the optimality of a scheduling model, proposed in a different literature, for the decision problem faced in the experiment. Under specific assumptions, I find that this model identifies the optimal scheduling of the tasks in the Admission Test. The third paper, "The Effects of Scholarships and Tuition Fees Discounts on Students' Performances: Which Monetary Incentives work Better?", explores how different levels of monetary incentives affect the achievement of students in tertiary education. I used a Regression Discontinuity Design to exploit the assignment of different monetary incentives and to study the effects of such liquidity provision on performance outcomes, ceteris paribus. The results show that a monetary increase in the scholarships generates no effect on performance, since the achievements of the recipients all cluster near the requirements for not having to return the benefit. Secondly, students who actually pay some share of the total cost of college attendance surprisingly perform better than those whose cost is completely subsidized. A lower benefit, relative to a higher aid, motivates students to finish early and avoid the extra cost of a delayed graduation.
Abstract:
This thesis explores methods based on the free energy principle and active inference for modelling cognition. Active inference is an emerging framework for designing intelligent agents in which psychological processes are cast in terms of Bayesian inference. Here, I appeal to it to test the design of a set of cognitive architectures via simulation. These architectures are defined in terms of generative models in which an agent executes a task under the assumption that all cognitive processes aspire to the same objective: the minimization of variational free energy. Chapter 1 introduces the free energy principle and its assumptions about self-organizing systems. Chapter 2 describes how a minimal form of cognition capable of autopoiesis can emerge from the mechanics of self-organization. In chapter 3 I present the method by which I formalize generative models for action and perception. The proposed architectures provide a more biologically plausible account of complex cognitive processing that entails deep temporal features. I then present three simulation studies that aim to show different aspects of cognition, their associated behavior and the underlying neural dynamics. In chapter 4, the first study proposes an architecture representing the visuomotor system for the encoding of actions during action observation, understanding and imitation. In chapter 5, the generative model is extended and lesioned to simulate brain damage and the neuropsychological patterns observed in apraxic patients. In chapter 6, the third study proposes an architecture for cognitive control and the modulation of attention for action selection. Finally, I argue that active inference can provide a formal account of information processing in the brain and that the adaptive capabilities of the simulated agents are a mere consequence of the architecture of the generative models. Cognitive processing then becomes an emergent property of the minimization of variational free energy.
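The central computation can be illustrated in miniature. The sketch below is my own generic illustration in Python, not code from the thesis; the two-state model, its probabilities and all names are assumptions. It casts perception in a discrete generative model as minimization of variational free energy F = KL[q(s) || p(s)] - E_q[ln p(o|s)]: the exact Bayesian posterior attains the minimum, where F equals the negative log evidence.

```python
import numpy as np

# Toy generative model (hypothetical numbers): 2 hidden states, 2 outcomes.
prior = np.array([0.5, 0.5])             # p(s)
likelihood = np.array([[0.9, 0.2],       # p(o|s), rows indexed by outcome o
                       [0.1, 0.8]])
o = 0                                    # the observed outcome

def free_energy(q):
    # F = KL[q(s) || p(s)] - E_q[ln p(o|s)]
    return float(np.sum(q * (np.log(q) - np.log(prior)))
                 - np.sum(q * np.log(likelihood[o])))

# Perception as inference: the Bayesian posterior minimizes F,
# where F reaches the negative log evidence -ln p(o).
posterior = prior * likelihood[o]
evidence = posterior.sum()               # p(o)
posterior = posterior / evidence
```

Any other belief q(s) yields a strictly larger F, which is what licenses treating inference as free energy minimization.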
Abstract:
Due to the several kinds of services that use the Internet and data-network infrastructures, present networks carry a diversity of traffic types with complex statistical properties, such as intricate temporal correlation and non-Gaussian distributions. The temporal correlation of network traffic may be characterized by Short-Range Dependence (SRD) and Long-Range Dependence (LRD). Models such as fGn (fractional Gaussian noise) can capture LRD but not SRD. This work presents two methods of traffic generation that synthesize approximate realizations of the self-similar fGn-with-SRD random process. The first employs the IDWT (Inverse Discrete Wavelet Transform) and the second the IDWPT (Inverse Discrete Wavelet Packet Transform). A variance map concept was developed that associates the LRD and SRD behaviors directly with the wavelet transform coefficients. The developed methods are extremely flexible and allow the generation of Gaussian time series with complex statistical behaviors.
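As a rough illustration of the idea (not the authors' implementation: the Haar wavelet choice, the variance law and every name below are my assumptions), one can draw independent Gaussian wavelet coefficients whose per-scale variance follows the power law 2^(j(2H-1)) dictated by the Hurst exponent H, a minimal "variance map", and push them through an inverse Haar DWT:

```python
import numpy as np

def haar_idwt(approx, details):
    # Inverse Haar DWT: rebuild the signal from the coarsest approximation
    # coefficient and the detail coefficients, coarsest scale first.
    x = approx
    for d in details:
        up = np.empty(2 * x.size)
        up[0::2] = (x + d) / np.sqrt(2.0)
        up[1::2] = (x - d) / np.sqrt(2.0)
        x = up
    return x

def synth_lrd_trace(n_levels, hurst, rng):
    # Independent Gaussian detail coefficients; the per-scale variance
    # 2^(octave * (2H - 1)) (coarser scales get larger variance for
    # H > 0.5) encodes the LRD behavior -- a minimal "variance map".
    approx = rng.standard_normal(1)
    details = []
    for k in range(n_levels):
        octave = n_levels - k                       # coarsest first
        var = 2.0 ** ((2.0 * hurst - 1.0) * octave)
        details.append(np.sqrt(var) * rng.standard_normal(2 ** k))
    return haar_idwt(approx, details)

rng = np.random.default_rng(42)
trace = synth_lrd_trace(10, 0.8, rng)               # 2**10 = 1024 samples
```

Shaping SRD as well, as the paper does, would additionally require tuning the fine-scale variances (or using the wavelet packet tree) rather than following a single power law.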
Abstract:
For a given self-map f of M, a closed smooth connected and simply-connected manifold of dimension m ≥ 4, we provide an algorithm for estimating the values of the topological invariant D_r^m[f], which equals the minimal number of r-periodic points in the smooth homotopy class of f. Our results are based on the combinatorial scheme for computing D_r^m[f] introduced by G. Graff and J. Jezierski [J. Fixed Point Theory Appl. 13 (2013), 63–84]. An open-source implementation of the algorithm programmed in C++ is publicly available at http://www.pawelpilarczyk.com/combtop/.
Abstract:
Variations on the standard Kohonen feature map can enable an ordering of the map state space using only a limited subset of the complete input vector. It is also possible to order the map using merely a local adaptation procedure, rather than relying on global variables and objectives. Such variations have been included in a hybrid learning system (HLS) which arose out of a genetic-based classifier system. The paper describes the modified feature map, which constitutes the HLS's long-term memory, and presents results on the control of a simple maze-running task, thereby demonstrating the value of goal-related feedback within the overall network.
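For readers unfamiliar with the underlying algorithm, a minimal standard 1-D Kohonen map is sketched below. This is a generic textbook version in Python; the paper's modifications, the HLS coupling, and all parameter values here are my assumptions, not the authors' system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim = 10, 2
weights = rng.random((n_nodes, dim))        # random initial codebook

def quantization_error(w, data):
    # Mean distance from each input to its best-matching unit (BMU).
    return float(np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in data]))

def train(data, w0, epochs=50, lr=0.5, radius=3.0):
    w = w0.copy()
    for t in range(epochs):
        eta = lr * (1.0 - t / epochs)                  # decaying learning rate
        sig = max(radius * (1.0 - t / epochs), 0.5)    # shrinking neighbourhood
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            dist = np.abs(np.arange(n_nodes) - bmu)    # 1-D map topology
            h = np.exp(-(dist ** 2) / (2.0 * sig ** 2))
            w += eta * h[:, None] * (x - w)            # pull BMU neighbourhood toward x
    return w

data = rng.random((200, dim))
trained = train(data, weights)
```

Note that each update is purely local to the winning node and its map neighbours, which is the property the paper's local-adaptation variation exploits.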
Abstract:
The main purpose of this work is to study coincidences of fiber-preserving self-maps over the circle S^1 for spaces which are fiber bundles over S^1 whose fiber is the Klein bottle K. We classify pairs of self-maps over S^1 which can be deformed fiberwise to a coincidence-free pair of maps. © 2012 Pushpa Publishing House.
Abstract:
We consider various problems regarding roots and coincidence points for maps into the Klein bottle. The root problem, where the target is the Klein bottle and the domain is a compact surface with non-positive Euler characteristic, is studied. Results similar to those when the target is the torus are obtained. The Wecken property for coincidences is established, and we also obtain the following 1-parameter result: families of pairs of maps which are coincidence free, but such that any homotopy between them creates a coincidence. This is done for any pair of maps for which the Nielsen coincidence number is zero. Finally, we exhibit one such family in which one of the maps is constant and where, if we allow homotopies of both maps, a coincidence-free pair of homotopies can be found.
Abstract:
Concept maps are a technique used to obtain a visual representation of a person's ideas about a concept or a set of related concepts. Specifically, in this paper, through a qualitative methodology, we analyze the concept maps proposed by 52 groups of teacher training students in order to find out the characteristics of the maps and the degree of adequacy of their contents with regard to the teaching of human nutrition in the 3rd cycle of primary education. The participants were enrolled in the Teacher Training Degree majoring in Primary Education, and the data collection was carried out through a training activity under the theme "What to teach about science in primary school?". The results show that the maps are a useful tool in teacher education, as they allow students to organize, synthesize, and communicate what they know. Moreover, through this work, it has been possible to see that future teachers have acceptable skills for representing concepts/ideas in a concept map, although the level of adequacy of the concepts/ideas about human nutrition and its relations is usually medium or low. These results are a wake-up call for teacher training, both initial and ongoing, because they show an inability to change priorities as far as the selection of content is concerned.
Abstract:
In this paper, a new method for the self-localization of mobile robots, based on a PCA positioning sensor operating in unstructured environments, is proposed and experimentally validated. The proposed PCA extension is able to compute eigenvectors from a set of signals corrupted by missing data. The sensor package considered in this work contains a 2D depth sensor pointed upwards at the ceiling, providing depth images with missing data. The resulting positioning sensor is then integrated into a Linear Parameter Varying mobile robot model to obtain a self-localization system, based on linear Kalman filters, with globally stable position error estimates. A study consisting of adding synthetic random corrupted data to the captured depth images revealed that this extended PCA technique is able to reconstruct the signals with improved accuracy. The self-localization system is assessed in unstructured environments, and the methodologies are validated even under varying illumination conditions.
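The key step, extracting principal components from signals with missing entries, can be approximated by a standard EM-style iteration. The sketch below is a generic Python illustration under my own assumptions, not the paper's exact PCA extension: fill the gaps with column means, fit a low-rank model by SVD, overwrite only the missing entries with the reconstruction, and repeat.

```python
import numpy as np

def pca_missing(X, n_comp=2, n_iter=100):
    # EM-style PCA with missing data: impute, fit a low-rank model, refine.
    X = X.copy()
    mask = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[mask] = col_mean[np.where(mask)[1]]            # initial mean fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp] + mu
        X[mask] = recon[mask]                        # update missing entries only
    return X, Vt[:n_comp]

# Synthetic check: exactly rank-2 data with ~10% of entries knocked out.
rng = np.random.default_rng(0)
full = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 8))
noisy = full.copy()
noisy[rng.random(full.shape) < 0.1] = np.nan
filled, components = pca_missing(noisy, n_comp=2)
```

Because the observed entries are never modified, the iteration can only improve the imputed values toward consistency with the dominant subspace.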
Abstract:
The exceptional properties of localised surface plasmons (LSPs), such as local field enhancement, confinement effects, and resonant behavior, make them ideal candidates to control the emission of luminescent nanoparticles. In the present work, we investigated the LSP effect on the steady-state and time-resolved emission properties of quantum dots (QDs) by organizing the dots into self-assembled dendrite structures deposited on plasmonic nanostructures. Self-assembled structures consisting of water-soluble CdTe mono-sized QDs were developed on the surface of co-sputtered TiO2 thin films doped with Au nanoparticles (NPs) annealed at different temperatures. Their steady-state fluorescence properties were probed by scanning the spatially resolved emission spectra, and the energy transfer processes were investigated by fluorescence lifetime imaging (FLIM) microscopy. Our results indicate that resonant coupling between excitons confined in the QDs and LSPs in the Au NPs located beneath the self-assembled structure does indeed take place, resulting in (i) a shift of the ground-state luminescence towards higher energies and the onset of emission from excited states in the QDs, and (ii) a decrease of the ground-state exciton lifetime (fluorescence quenching).
Abstract:
The large spatial inhomogeneity in the transmit B1 field (B1+) observable in human MR images at high static magnetic fields (B0) severely impairs image quality. To overcome this effect in brain T1-weighted images, the MPRAGE sequence was modified to generate two different images at different inversion times (MP2RAGE). By combining the two images in a novel fashion, it was possible to create T1-weighted images free of proton density contrast, T2* contrast, reception bias field, and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize the contrast-to-noise ratio per unit time between brain tissues and to minimize the effect of B1+ variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T1-weighted images, acquired within 12 min, high-resolution 3D T1 maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). The T1 maps were validated in phantom experiments. In humans, the T1 values obtained at 7 T were 1.15 +/- 0.06 s for white matter (WM) and 1.92 +/- 0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T1 values obtained (0.81 +/- 0.03 s for WM and 1.35 +/- 0.05 s for GM) were once again found to be in very good agreement with values in the literature. (C) 2009 Elsevier Inc. All rights reserved.
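The combination of the two inversion-time images is commonly implemented as the MP2RAGE uniform (UNI) ratio; a minimal sketch follows (Python, with function and variable names of my own choosing; the synthetic arrays stand in for real acquisitions):

```python
import numpy as np

def mp2rage_uni(gre_ti1, gre_ti2):
    # Combine the two complex GRE images acquired at the two inversion
    # times. The ratio cancels proton density, T2* and reception-field
    # factors common to both images, leaving (to first order) T1 contrast.
    num = np.real(gre_ti1 * np.conj(gre_ti2))
    den = np.abs(gre_ti1) ** 2 + np.abs(gre_ti2) ** 2
    return num / den                      # bounded within [-0.5, 0.5]

# Demo on random complex "images" (synthetic data, not MR acquisitions).
rng = np.random.default_rng(3)
s1 = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
s2 = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
uni = mp2rage_uni(s1, s2)
```

The bound follows from |Re(a·conj(b))| ≤ |a||b| ≤ (|a|² + |b|²)/2, which is why the combined image has a fixed, bias-free intensity range.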