825 results for Network analysis (Planning)
Abstract:
The current state of the art in the planning and coordination of autonomous vehicles is based on the presence of speed lanes. In a traffic scenario with a large diversity of vehicles, removing speed lanes can yield significantly higher traffic bandwidth. Vehicle navigation in such unorganized traffic is considered. An evolutionary trajectory-planning technique has the advantage of making driving efficient and safe, but it must also overcome the hurdle of computational cost. In this paper, we propose a real-time genetic algorithm with Bezier curves for trajectory planning. The main contribution is the integration of vehicle-following and overtaking behaviour for general traffic as heuristics for coordination between vehicles. The resulting coordination strategy is fast and near-optimal. As the vehicles move, uncertainties may arise; these are constantly adapted to, and may even lead to the cancellation of an overtaking procedure or the initiation of a new one. Higher-level planning is performed by Dijkstra's algorithm, which indicates the route the vehicle should follow in a road network. Re-planning is carried out when a road blockage or obstacle is detected. Experimental results confirm the success of the algorithm with respect to optimal high- and low-level planning, re-planning and overtaking.
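The two planning layers named in this abstract can be sketched in a few lines: a cubic Bezier curve for the local trajectory and Dijkstra's algorithm for the high-level route. This is a minimal illustrative sketch, not the paper's genetic algorithm; the graph format and control points are assumptions.

```python
import heapq

def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def dijkstra(graph, start, goal):
    """Shortest route in a road network given as {node: [(neighbour, cost), ...]}."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from the goal to reconstruct the route.
    route, node = [goal], goal
    while node != start:
        node = prev[node]
        route.append(node)
    return route[::-1]
```

For example, `dijkstra({"A": [("B", 1), ("C", 4)], "B": [("C", 1)]}, "A", "C")` returns the route `["A", "B", "C"]`; a lane-free overtaking arc could then be traced by sampling `bezier` between waypoints.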
Abstract:
Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations in which it is impractical or impossible to draw from the transition kernel P. This is the case, for instance, with massive datasets, where it is prohibitively expensive to calculate the likelihood, and with intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P with an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations in which it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
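The idea of replacing P with an approximation P̂ can be illustrated with a toy random-walk Metropolis-Hastings chain whose log-density is evaluated with small added noise, a stand-in for a subsampled or estimated likelihood. A minimal sketch, assuming a standard normal target; the noise model and all parameters are illustrative, not from the paper.

```python
import math
import random

def mh_chain(logpi, n, step=1.0, x0=0.0, rng=None):
    """Random-walk Metropolis-Hastings driven by a (possibly approximate) log-density."""
    rng = rng or random.Random(0)
    x, lp = x0, logpi(x0)
    out = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq = logpi(y)
        # Accept with probability min(1, pi(y)/pi(x)), here in log space.
        if math.log(rng.random()) < lq - lp:
            x, lp = y, lq
        out.append(x)
    return out

# Exact kernel P targets a standard normal via its log-density ...
exact_logpi = lambda x: -0.5 * x * x

# ... while P-hat sees a cheap surrogate: the same log-density plus small noise.
noise_rng = random.Random(1)
approx_logpi = lambda x: -0.5 * x * x + 0.01 * noise_rng.gauss(0.0, 1.0)

chain = mh_chain(approx_logpi, 5000)
mean = sum(chain) / len(chain)
```

With the perturbation this small, the chain driven by P̂ still has sample mean near 0 and variance near 1; the stability theory referred to in the abstract is what turns such observations into quantitative bounds.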
Abstract:
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of biased weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented. Some support for the use of biased estimates emerges, but we advocate caution in their use.
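A Bayes factor is a ratio of two model evidences, so the simplest simulation-based route is to estimate each evidence by importance sampling. The sketch below estimates the evidence of a conjugate normal model, where the exact answer is available for comparison; it is a plain importance-sampling baseline under assumed toy settings, not the paper's random-weight or SMC estimators.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def is_evidence(y, n=200_000, seed=0):
    """Importance-sampling estimate of p(y) = integral of N(y; theta, 1) N(theta; 0, 1) dtheta.

    Drawing theta from the prior makes the importance weight equal to the
    likelihood, so the evidence estimate is the average likelihood."""
    rng = random.Random(seed)
    return sum(normal_pdf(y, rng.gauss(0.0, 1.0), 1.0) for _ in range(n)) / n

y = 0.7
est = is_evidence(y)
# Conjugacy gives the exact evidence: p(y) = N(y; 0, sqrt(2)).
exact = normal_pdf(y, 0.0, math.sqrt(2.0))
```

A Bayes factor estimate is then the ratio of two such evidence estimates; the paper's methods replace the exact likelihood inside the weight with a simulation-based (possibly biased) estimate.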
Abstract:
The Team Formation Problem (TFP) has become a well-known problem in the OR literature over the last few years. In this problem, a group of individuals whose combined skills match a required set must be chosen so as to maximise one or several positive social attributes. Specifically, the aim of the current research is two-fold. First, two new dimensions are added to the TFP by considering multiple projects and fractions of people's dedication. This new problem is named the Multiple Team Formation Problem (MTFP). Second, an optimization model consisting of a quadratic objective function, linear constraints and integer variables is proposed for the problem. The optimization model is solved by three algorithms: a Constraint Programming approach provided by a commercial solver, a Local Search heuristic and a Variable Neighbourhood Search metaheuristic. These three algorithms constitute the first attempt to solve the MTFP, with the Variable Neighbourhood Search metaheuristic proving the most efficient in almost all cases. Applications of this problem commonly appear in real-life situations, particularly with the current and ongoing development of social network analysis. Therefore, this work opens multiple paths for future research.
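A Local Search heuristic of the kind mentioned can be sketched for a simplified variant of the problem: a quadratic objective that sums pairwise affinities within each team, with size-preserving swap moves. The affinity matrix and move set are illustrative assumptions, not the paper's MTFP model (which additionally handles multiple projects and fractional dedication).

```python
import itertools
import random

def objective(assign, affinity):
    """Quadratic objective: total pairwise affinity inside each team."""
    total = 0.0
    for i, j in itertools.combinations(range(len(assign)), 2):
        if assign[i] == assign[j]:
            total += affinity[i][j]
    return total

def local_search(affinity, n_projects, seed=0, iters=200):
    """Hill-climbing over swap moves; swaps keep team sizes balanced."""
    rng = random.Random(seed)
    n = len(affinity)
    assign = [i % n_projects for i in range(n)]  # balanced initial assignment
    best = objective(assign, affinity)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        if assign[i] == assign[j]:
            continue
        assign[i], assign[j] = assign[j], assign[i]
        val = objective(assign, affinity)
        if val > best:
            best = val  # keep the improving swap
        else:
            assign[i], assign[j] = assign[j], assign[i]  # revert
    return assign, best
```

On a 4-person, 2-project instance where persons 0-1 and 2-3 have high mutual affinity, the search recovers that block structure. A Variable Neighbourhood Search would additionally cycle through larger move neighbourhoods when swaps stall.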
Abstract:
Estimating the sizes of hard-to-count populations is a challenging and important problem that occurs frequently in social science, public health, and public policy. This problem is particularly pressing in HIV/AIDS research because estimates of the sizes of the most at-risk populations (illicit drug users, men who have sex with men, and sex workers) are needed for designing, evaluating, and funding programs to curb the spread of the disease. A promising new approach in this area is the network scale-up method, which uses information about the personal networks of respondents to make population size estimates. However, if the target population has low social visibility, as is likely to be the case in HIV/AIDS research, scale-up estimates will be too low. In this paper we develop a game-like activity, which we call the game of contacts, to estimate the social visibility of groups, and report results from a study of heavy drug users in Curitiba, Brazil (n = 294). The game produced estimates of social visibility that were consistent with qualitative expectations but of surprising magnitude. Further, a number of checks suggest that the data are of high quality. While motivated by the specific problem of population size estimation, our method could be used by researchers more broadly and adds to long-standing efforts to combine the richness of social network analysis with the power and scale of sample surveys.
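The basic scale-up estimator behind this approach is N_hidden ≈ N × Σᵢ yᵢ / Σᵢ dᵢ, where yᵢ is how many members of the hidden group respondent i reports knowing and dᵢ is respondent i's network degree, itself estimated from reports about groups of known size. Dividing by a visibility factor in (0, 1] corrects the downward bias for low-visibility groups. A minimal sketch; the function name and the simple one-factor visibility adjustment are assumptions, not the paper's exact estimator.

```python
def scale_up_estimate(known_reports, known_sizes, hidden_reports, total_pop, visibility=1.0):
    """Network scale-up estimate of a hidden population's size.

    known_reports[i][k]: how many people respondent i knows in known group k
    known_sizes[k]:      true size of known group k (used to estimate degree)
    hidden_reports[i]:   how many people respondent i knows in the hidden group
    visibility:          estimated chance a hidden-group tie is actually visible
    """
    total_known = sum(known_sizes)
    # Degree of respondent i: d_i = N * (sum of known-group reports) / (sum of known sizes)
    degrees = [total_pop * sum(r) / total_known for r in known_reports]
    basic = total_pop * sum(hidden_reports) / sum(degrees)
    return basic / visibility
```

For instance, with two respondents, known groups of sizes 100 and 50 in a population of 1,000, and two hidden-group contacts reported each, the basic estimate is 100; a measured visibility of 0.5 doubles it to 200.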
Abstract:
Motivated by the development of a graphical representation of networks with a large number of vertices, useful for collaborative filtering applications, this work proposes the use of cohesion surfaces over a multidimensionally scaled thematic base. To this end, it uses a combination of classical multidimensional scaling and Procrustes analysis in an iterative algorithm that produces partial solutions, later combined into a global solution. Applied to an example of book-loan transactions at the Karl A. Boedecker Library, the proposed algorithm produces interpretable and thematically coherent outputs and exhibits lower stress than the classical scaling solution.
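The two building blocks of the proposed algorithm, classical multidimensional scaling and Procrustes alignment of partial solutions, can be sketched generically as follows. This is a textbook implementation of the two components, not the paper's iterative combination scheme.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions from an
    n x n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def procrustes_align(X, Y):
    """Orthogonal Procrustes: rotate/reflect centred Y to best match centred X,
    minimising ||Y R - X|| over orthogonal R."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return Y @ (U @ Vt)
```

An iterative scheme like the one described would run `classical_mds` on overlapping subsets of vertices and use `procrustes_align` to stitch the partial configurations into one global layout.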
Abstract:
Business strategy is a young discipline. Compared with the fields of economics and sociology, business strategy can be seen as a more recently formed phenomenon, although an extremely dynamic one in its capacity to create distinct theoretical approaches. This work discusses the recent proliferation of theories in business strategy, proposing a classification model for these theories based on an empirical analysis of the schools-of-thought model developed by Mintzberg, Ahlstrand and Lampel in their book Strategy Safari (1998). The possible consequences of the interaction between theory and practice are also discussed, presenting what we define as the platypus syndrome.
Abstract:
Recommendation systems based on indirect cooperation can be implemented in libraries through the application of network analysis concepts and procedures. A thematic distance measure, initially developed for dichotomous variables, was generalized and applied to co-occurrence matrices, allowing all available information about users' behaviour regarding the items they consulted to be exploited. As a result, highly coherent specialized subgroups were formed, for which base lists and personalized lists were generated in the usual way. Programmable, matrix-capable applications such as the S-plus software were used for the calculations (with advantages over the specialized UCINET 5.0 software), proving sufficient for processing thematic groups of up to 10,000 users.
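The abstract does not spell out the generalized distance, so the sketch below uses one standard way to extend a dichotomous (Jaccard-type) distance to co-occurrence counts, the Tanimoto form d(u, v) = 1 − Σ min(uᵢ, vᵢ) / Σ max(uᵢ, vᵢ); treat it as an illustrative assumption, not the paper's actual measure.

```python
def generalized_jaccard_distance(u, v):
    """Tanimoto-style distance on count vectors.

    On 0/1 vectors this reduces to the classical Jaccard distance; on
    co-occurrence counts it uses the full magnitude information."""
    num = sum(min(a, b) for a, b in zip(u, v))
    den = sum(max(a, b) for a, b in zip(u, v))
    return 1.0 - num / den if den else 0.0
```

For two users' loan-count vectors over the same items, small distances group the users into the kind of thematically coherent subgroups the abstract describes; on binary vectors `[1, 0, 1]` and `[1, 1, 0]` it gives the familiar Jaccard distance 2/3.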
Abstract:
Last week I sat down with a Brazilian acquaintance who was shaking his head over the state of national politics. A graduate of a military high school, he'd been getting e-mails from former classmates, many of them now retired army officers, who were irate over the recent presidential elections. "We need to kick these no-good Petistas out of office," one bristled, using the derogatory shorthand for members of the ruling Workers Party, or PT in Portuguese.
Abstract:
What you see above is a graphic representation of something anyone who followed the campaign that led to the re-election of Dilma Rousseff as Brazil's president on October 26 already knows: the election was the most polarised in the country's history. Brazil was split down the middle, not only numerically (Dilma got 52 per cent, Aécio Neves 48) and geographically (Dilma won in the less developed north, Aécio in the more prosperous south). The Twittersphere, too, was divided into two camps. Not only that; they hardly talked to each other at all.
Abstract:
Using the database of papers published in the Enanpad meetings held in 2002-04, this article focuses on the information management field to carry out an exploratory study of how patterns form in the structures of academic knowledge dissemination in Brazil, supported by concepts drawn from social network analysis combined with foundations from graph theory and computational resources. We sought to map the information flows that enable knowledge exchange through the existing links in academia. The results indicate the need to broaden and tighten the ties between authors, notably those with some degree of local centrality, in order to strengthen teaching institutions and break down resistance to joint production between them, countering the detected pattern of endogenous reproduction.
Abstract:
Everything before the election seemed to be pointing to a Labour lead. Even pollsters got it wrong. But a network analysis of the Twitter conversations about the general election highlights just how much hype there was around Labour in the run-up to the big day. Marco Ruediger and his colleagues at the department of public policy analysis at the Fundação Getulio Vargas in Rio de Janeiro analysed and visualised millions of tweets during the campaign.