Abstract:
This thesis addresses the problem of learning in physical heterogeneous multi-agent systems (MAS) and analyses the benefits of heterogeneous MAS with respect to homogeneous ones. An algorithm is developed for this task, building on previous work on stability in distributed systems by Tad Hogg and Bernardo Huberman and combining two phenomena observed in natural systems: task partition and hierarchical dominance. The algorithm allows agents to learn which tasks are best to perform on the basis of each agent's skills and its contribution to the team's global performance. Agents learn by interacting with the environment and with their teammates, and receive rewards based on the results of the actions they perform. The algorithm is specifically designed for problems where all robots have to co-operate and work simultaneously towards the same goal. One example of such a problem is role distribution in a team of heterogeneous robots forming a soccer team, where all members take decisions and co-operate simultaneously. Soccer offers the possibility of conducting research in MAS where co-operation plays a very important role in a dynamic, changing environment. For these reasons, and given the experience of the University of Girona in this domain, soccer has been selected as the test-bed for this research. In the case of soccer, tasks are grouped by means of roles. One of the most interesting features of the algorithm is that it endows the MAS with high adaptability to changes in the environment: the team keeps performing its tasks while adapting to the environment. This is studied in several cases, for changes both in the environment and in the robots' bodies. Other features are also analysed, especially a parameter that defines the fitness (in the biological sense) of each agent in the system, which contributes to performance and team adaptability. The algorithm is then applied to let agents in homogeneous and heterogeneous robot teams learn which roles to select in order to maximise team performance. The teams are compared, and performance is evaluated in games against three hand-coded teams and against the different homogeneous and heterogeneous teams built in this thesis. This part of the work focuses on the analysis of performance and task partition, in order to study the benefits of heterogeneity in physical MAS. To study heterogeneity rigorously, a diversity measure is developed building on the hierarchic social entropy defined by Tucker Balch, adapted here to quantify physical diversity in robot teams. This tool presents very interesting features, as it can be used in the future to design heterogeneous teams on the basis of knowledge of other teams.
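As an aside on the diversity measure mentioned above, the sketch below is a much-simplified illustration in the spirit of hierarchic social entropy: robots are described by feature vectors, grouped at increasing distance thresholds, and the Shannon entropy of each grouping is integrated over the threshold. The feature representation, the single-linkage grouping and all names are assumptions made for the sketch, not details taken from the thesis.

import numpy as np
from itertools import combinations

def clusters_at_threshold(features, h):
    # Group robots whose feature vectors lie within distance h of each other
    # (a simple single-linkage style union-find; an illustrative choice).
    n = len(features)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if np.linalg.norm(features[i] - features[j]) <= h:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

def social_entropy(labels):
    # Shannon entropy of the cluster-size distribution.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def hierarchic_social_entropy(features, h_max, steps=100):
    # Integrate the entropy of the grouping over the distance threshold h.
    hs = np.linspace(0.0, h_max, steps)
    ents = [social_entropy(clusters_at_threshold(features, h)) for h in hs]
    return np.trapz(ents, hs)

# Toy team: each row is one robot described by hypothetical physical features.
team = np.array([[1.0, 0.2], [1.1, 0.25], [0.3, 0.9], [0.35, 0.95]])
print(hierarchic_social_entropy(team, h_max=2.0))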
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence for every lag. By accumulating the components starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
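A minimal sketch of the accumulation step described above, assuming the per-stage components of variance have already been estimated by the nested analysis of variance (or REML); the separating distances and component values below are invented for illustration.

import numpy as np

# Hypothetical separating distances for each sampling stage (geometric progression)
# and the corresponding estimated components of variance from a nested ANOVA or REML fit.
lags = np.array([1.0, 3.0, 9.0, 27.0])        # separating distances, shortest first (m)
components = np.array([0.8, 1.5, 2.1, 0.9])   # variance attributable to each stage

# Accumulating the components from the shortest lag upwards gives a rough
# estimate of the variogram at the separating distances of the design.
semivariance = np.cumsum(components)

for h, g in zip(lags, semivariance):
    print(f"lag {h:5.1f} m  ->  gamma(h) ~ {g:.2f}")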
Abstract:
In this paper we focus on the one-year-ahead prediction of the daily trajectory of electricity peak demand during the winter season in Central England and Wales. We define a Bayesian hierarchical model for predicting the winter trajectories and present results based on the past observed weather. Thanks to the flexibility of the Bayesian approach, we are able to produce the marginal posterior distributions of all the predictands of interest. This is fundamental progress with respect to the classical methods. The results are encouraging in both skill and representation of uncertainty. Further extensions are straightforward, at least in principle. The two main ones consist in conditioning the weather generator model on additional information, such as knowledge of the first part of the winter and/or the seasonal weather forecast. Copyright (C) 2006 John Wiley & Sons, Ltd.
Abstract:
Elephant poaching and the ivory trade remain high on the agenda at meetings of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Well-informed debates require robust estimates of trends, the spatial distribution of poaching, and the drivers of poaching. We present an analysis of trends and drivers of an indicator of poaching across all elephant species. The site-based monitoring system known as Monitoring the Illegal Killing of Elephants (MIKE), set up by the 10th Conference of the Parties of CITES in 1997, produces carcass encounter data reported mainly by anti-poaching patrols. The data analyzed were site-by-year totals of 6,337 carcasses from 66 sites in Africa and Asia from 2002 to 2009. Analysis of these observational data poses a serious challenge to traditional statistical methods because of the opportunistic and non-random nature of the patrols and the heterogeneity across sites. Adopting a Bayesian hierarchical modeling approach, we used the proportion of carcasses that were illegally killed (PIKE) as a poaching index to estimate the trend and the effects of site- and country-level factors associated with poaching. Important drivers of illegal killing that emerged at the country level were poor governance and low levels of human development, and at the site level, forest cover and the area of the site in regions where human population density is low. After a drop from 2002, PIKE remained fairly constant from 2003 until 2006, after which it increased until 2008. The results for 2009 indicate a decline. Sites with PIKE ranging from the lowest to the highest were identified. The results of the analysis provide a sound information base for scientific, evidence-based decision making in the CITES process.
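For concreteness, a minimal sketch of the PIKE index described above: the proportion of carcasses encountered at a site in a year that were illegally killed, together with the simplest single site-year Bayesian treatment (a Beta posterior). The counts are invented; the paper's model is hierarchical across sites, years and countries, which this sketch does not attempt.

records = [  # site-by-year carcass counts (illustrative numbers only)
    {"site": "A", "year": 2006, "illegal": 12, "total": 30},
    {"site": "A", "year": 2007, "illegal": 18, "total": 33},
    {"site": "B", "year": 2006, "illegal": 3,  "total": 25},
]

for r in records:
    # Raw PIKE: proportion of encountered carcasses that were illegally killed.
    pike = r["illegal"] / r["total"]
    # Simplest Bayesian building block: a flat Beta(1, 1) prior on the proportion
    # gives a Beta(illegal + 1, legal + 1) posterior for this single site-year.
    a = r["illegal"] + 1
    b = r["total"] - r["illegal"] + 1
    post_mean = a / (a + b)
    print(f"site {r['site']}, {r['year']}: PIKE = {pike:.2f}, posterior mean = {post_mean:.2f}")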
Abstract:
The Back to the Future trilogy incorporates several different generic elements, including aspects of the fifties teen movie, science fiction, comedy and the western. These different modes playfully intertwine with each other, creating a complex world of repetitions, echoes and modulations. This essay seeks to interrogate the construction of generic elements and the play between them through a close analysis of a repeated performance. Genre is signalled through various strategies employed within the construction of mise-en-scène, and a significant portion of this, as I argue, is transmitted through performance. The material detail of a performance (incorporating gesture, movement, voice, and even surrounding elements such as costume), as well as the way it is presented within a film, is key to the establishment, invocation and coherence of genre. Furthermore, attention to the complexity of performance details, particularly the manner in which they reverberate across texts, demonstrates the intricacy of genre and its inherent mutability. The Back to the Future trilogy represents a specific interest in the flexibility of genre. Within each film, and especially across all three, aspects of various genres are interlaced through both visual and narrative detail, thus constructing a dense layer of references both within and beyond the texts. To explore this patterning in more detail, I interrogate the contribution of performance to generic play through close analysis of Thomas F. Wilson's performance of Biff/Griff/Burford Tannen and his central encounter with Marty McFly (Michael J. Fox) in each film. These moments take place in a fifties diner, a 1980s retro diner and a saloon respectively, each space contributing to the similarities and differences in each repetition. Close attention to Wilson's performance of each related character, which contains both modulations and repetitions used specifically to place each film's central generic theme, demonstrates how deeply embedded the play between genres and their flexibility is within the trilogy.
Abstract:
Point placement strategies aim at mapping data points represented in higher dimensions to bi-dimensional spaces and are frequently used to visualize relationships among data instances. They have been valuable tools for analysis and exploration of data sets of various kinds. Many conventional techniques, however, do not behave well when the number of dimensions is high, as in the case of document collections. Later approaches handle that shortcoming, but may cause too much clutter to allow flexible exploration to take place. In this work we present a novel hierarchical point placement technique that is capable of dealing with these problems. While good grouping and separation of highly similar data is maintained without increasing computational cost, its hierarchical structure lends itself both to exploration at various levels of detail and to handling data in subsets, improving analysis capability and also allowing manipulation of larger data sets.
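A toy two-level illustration of the general idea of hierarchical point placement (not the authors' technique): cluster the high-dimensional points, project only the cluster centroids to the plane, then place each point around its centroid using a local projection. The clustering and projections chosen here (k-means and PCA) and all parameters are assumptions made for the sketch.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))   # stand-in for a high-dimensional document collection

# Level 1: group similar points and project only the group centroids to 2-D.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
centers_2d = PCA(n_components=2).fit_transform(km.cluster_centers_)

# Level 2: place each point near its centroid using a local projection of its cluster.
placement = np.empty((X.shape[0], 2))
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    if len(members) < 3:
        placement[members] = centers_2d[c]   # too few points for a local projection
        continue
    local = PCA(n_components=2).fit_transform(X[members])
    placement[members] = centers_2d[c] + 0.1 * local   # shrink local detail to keep groups apart

print(placement[:5])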
Optical Properties and Charge-Transfer Excitations in Edge-Functionalized All-Graphene Nanojunctions
Abstract:
We investigate the optical properties of edge-functionalized graphene nanosystems, focusing on the formation of junctions and charge-transfer excitons. We consider a class of graphene structures that combine the main electronic features of graphene with the wide tunability of large polycyclic aromatic hydrocarbons. By investigating prototypical ribbon-like systems, we show that, upon a suitable choice of functional groups, low-energy excitations with remarkable charge-transfer character and large oscillator strength are obtained. These properties can be further modulated through an appropriate width variation, thus spanning a wide range in the low-energy region of the UV-vis spectra. Our results are relevant in view of designing all-graphene optoelectronic nanodevices, which take advantage of the versatility of molecular functionalization together with the stability and the electronic properties of graphene nanostructures.
Abstract:
Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms. Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model. Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
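A deliberately simplified, fixed-effects-only sketch of the back-and-forth iteration the abstract describes: fit the mean model by weighted least squares, regress the log of the squared residuals on the variance-model covariates (a crude stand-in for the gamma GLM with log link), turn the fitted values into per-observation variances, and iterate. The data, designs and bias correction are illustrative assumptions; the real DHGLM also includes random effects on both levels, as fitted in ASReml.

import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])      # mean-model design
Z = np.column_stack([np.ones(n), rng.integers(0, 2, n)])   # variance-model design (e.g. a group code)

# Simulate data whose residual variance depends on Z.
beta_true, gamma_true = np.array([2.0, 1.0]), np.array([0.0, 1.0])
y = X @ beta_true + rng.normal(scale=np.exp(0.5 * (Z @ gamma_true)), size=n)

w = np.ones(n)                         # start with equal weights
for _ in range(20):
    # (1) Mean model: weighted least squares with the current variance estimates.
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    e2 = (y - X @ beta) ** 2
    # (2) Variance model: log-linear fit to the squared residuals,
    #     standing in for the gamma GLM of a DHGLM.
    gamma = np.linalg.solve(Z.T @ Z, Z.T @ np.log(e2 + 1e-12))
    gamma[0] += 1.2704                 # correct the intercept for E[log(chi^2_1)] ~ -1.27
    w = 1.0 / np.exp(Z @ gamma)        # inverse-variance weights for the next mean-model fit

print("mean-model coefficients    :", beta)
print("variance-model coefficients:", gamma)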
Abstract:
The peculiarities of banking activity, normally seen as fundamental to the pursuit of development and strongly influenced by law, stimulated the emergence of an international regime for regulating the sector. This came about in the wake of work carried out by international organizations such as the Basel Committee on Banking Supervision (BCBS) and the Financial Stability Board (FSB), and from the perception that we live in a world in which markets are highly interconnected but remain nationally regulated. Setting aside the discussion of the merit and effectiveness of the regulatory standards proposed by these organizations, in a context in which a number of countries seek to implement them, this work is concerned with examining the elements that define the appropriate degree of implementation discretion granted when those standards are formulated. The analysis of this problem suggests two extremes to be avoided: regulatory arbitrage and one size fits all. Avoiding regulatory arbitrage is a concern of the banking regulation literature that translates into containing excessive variation between the regulatory regimes of different jurisdictions. This gives rise to three vectors favouring a lower degree of discretion, represented by the goals of greater coordination, greater competitiveness and avoiding a regulatory race to the bottom among countries. Avoiding one size fits all, in turn, is a recurring concern of the law and development literature, which points to the need to attend to local peculiarities when formulating regulatory policies. This gives rise to three other vectors, this time towards a greater degree of discretion, represented by concerns with the efficiency of the measures adopted, with guaranteeing room for manoeuvre that respects countries' self-determination (at least mitigating potential democratic deficits in the setting of international standards), and with the practical viability of experimentalism. To analyse this problem, taking these extremes into account, a two-part strategy is proposed: the construction of a theoretical framework and the testing of a research hypothesis, according to which a specific case of banking regulation can show how these elements interact in defining the degree of discretion. Thus, in a first stage, after the necessary contextualization and methodological description, a theoretical framework of the problem is built in the light of the banking regulation literature and of the tools used in discussions about the impact of law on development, discussions that for years have addressed the formulation of international standards and their implementation in diverse national contexts. Also in this first stage, as part of laying the theoretical groundwork, an excursus examines the hypothesis that trust in the banking system is a kind of commons, along with its possible consequences. Building on this framework, the segment of banking regulation concerning deposit insurers is chosen for a case study. This analysis, carried out with input from bibliographic and empirical research, seeks to show with what degree of discretion, and in what way, international standards in this segment were formulated and implemented.
Finally, it analyses how the vectors determining the degree of discretion interact in the case of deposit insurers, as well as the suggestions that can be inferred from this analysis for the other segments of banking regulation.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A number of attempts have been made to obtain a clear definition of biological stress. In spite of these efforts, however, some controversies about the concept of plant stress remain. The current versions are centered either on the cause (stress factor) or on the effect (stress response) of environmental stress. The objective of this study was to contribute to the definition of stress using a hierarchical approach. Thus, we analyzed the most common stress concepts and tested the relevance of considering different observation scales in a study of plant response to water deficit. Seedlings of Eucalyptus grandis were grown in vitro at water potentials ranging from -0.16 to -0.6 MPa and evaluated according to growth and biochemical parameters. Data were analyzed through principal component analysis (PCA), which pointed to a hierarchical organization in plant responses to environmental disturbances. Growth parameters (height and dry weight) were more sensitive to water deficit than biochemical ones (sugars, proline, and protein), suggesting that higher hierarchical levels are more sensitive to environmental constraints than lower ones. We suggest that before considering an environmental fluctuation as stressful, it is necessary to take into account different levels of plant response, and that the evaluation of the effects of environmental disturbances on an organism depends on the observation scale being used. Hence, a more appropriate stress concept should consider the hierarchical organization of biological systems, not only for a more adequate theoretical approach, but also for the improvement of practical studies on plants under stress.
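A minimal numpy sketch of the kind of principal component analysis used in the study: standardize the measured variables and inspect how much variance the first components explain and how each variable loads on them. The data matrix here is a random placeholder, not the Eucalyptus grandis measurements.

import numpy as np

rng = np.random.default_rng(42)

# Placeholder matrix: rows are seedlings grown at different water potentials,
# columns are measured variables (growth and biochemical parameters).
variables = ["height", "dry_weight", "sugars", "proline", "protein"]
X = rng.normal(size=(40, len(variables)))

# Standardize, then take the SVD of the centred data to get the principal components.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)

explained = s**2 / np.sum(s**2)
print("variance explained by PC1, PC2:", explained[:2])
for name, loading in zip(variables, Vt[0]):
    print(f"PC1 loading for {name:>10s}: {loading:+.2f}")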
Abstract:
We consider the management branch model, in which the random resources of the subsystems follow exponential distributions. The deterministic equivalent is a block-structured quadratic programming problem. It is solved efficiently by means of a decomposition method based on iterative aggregation. The aggregation problem at the upper level is solved analytically. This overcomes the difficulties arising from the large dimension of the main problem.
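The abstract gives no explicit formulation, so the sketch below is only a generic stand-in: a toy block-structured resource-allocation QP in which each subsystem's quadratic subproblem is solved independently for a given price on the shared resource, while an upper-level step adjusts that price (a projected subgradient step here, in place of the analytically solved aggregation problem of the paper). All numbers are illustrative.

import numpy as np

# Toy block-structured QP:  minimise sum_i 0.5 * (x_i - t_i)^2
#                           subject to sum_i x_i <= R and x_i >= 0.
# The blocks (subsystems) are coupled only through the shared resource R,
# which is what decomposition methods exploit.
t = np.array([4.0, 2.0, 6.0, 1.0])   # each subsystem's unconstrained optimum (illustrative)
R = 8.0                              # total resource available

lam, step = 0.0, 0.2
for _ in range(200):
    # Lower level: every block solves its own subproblem for the current price lam.
    x = np.maximum(t - lam, 0.0)
    # Upper level: adjust the price on the shared resource (projected subgradient step).
    lam = max(0.0, lam + step * (x.sum() - R))

print("allocation:", x, " resource used:", x.sum(), " price:", lam)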