873 results for Multi-Agent Model


Relevance:

90.00%

Publisher:

Abstract:

Radio frequency identification (RFID) technology has gained increasing popularity in business as a means of improving operational efficiency and maximising cost savings. However, there is a gap in the literature exploring the enhanced use of RFID to add substantial value to supply chain operations, especially beyond what RFID vendors can offer. This paper presents a multi-agent system, incorporating RFID technology, aimed at filling that gap. The system is developed to model supply chain activities (in particular, logistics operations) and comprises autonomous, intelligent agents representing the key entities in the supply chain. With the advanced characteristics of RFID incorporated, the agent system examines how logistics operations (the distribution network in particular) can be efficiently reconfigured and optimised in response to dynamic changes in the market, in production and at any stage in the supply chain. © 2012 IEEE.
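A minimal sketch of the general idea, under assumed agent and event names (RFIDEvent, DistributionCentreAgent and CoordinatorAgent are hypothetical, not taken from the paper): RFID read events update the load of distribution-centre agents, and a coordinator reroutes the next shipment to the least-utilised centre.

```python
# Illustrative sketch only: hypothetical agents reacting to RFID read events,
# not the system described in the abstract. The idea shown is rerouting work
# to the least-utilised distribution centre as tag reads update centre loads.
from dataclasses import dataclass

@dataclass
class RFIDEvent:
    tag_id: str      # unique tag on the pallet or case
    location: str    # reader location, e.g. a distribution centre
    status: str      # "arrived" or "departed"

@dataclass
class DistributionCentreAgent:
    name: str
    capacity: int
    load: int = 0

    def utilisation(self) -> float:
        return self.load / self.capacity

class CoordinatorAgent:
    """Routes the next shipment to the least-utilised centre."""
    def __init__(self, centres):
        self.centres = {c.name: c for c in centres}

    def handle(self, event: RFIDEvent) -> str:
        centre = self.centres[event.location]
        centre.load += 1 if event.status == "arrived" else -1
        best = min(self.centres.values(), key=DistributionCentreAgent.utilisation)
        return best.name   # where the coordinator sends the next shipment

centres = [DistributionCentreAgent("DC-A", 100), DistributionCentreAgent("DC-B", 80)]
coordinator = CoordinatorAgent(centres)
print(coordinator.handle(RFIDEvent("TAG-001", "DC-A", "arrived")))
```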

Relevance:

90.00%

Publisher:

Abstract:

Postprint

Relevance:

90.00%

Publisher:

Abstract:

Peer reviewed

Relevance:

90.00%

Publisher:

Abstract:

Magnetic resonance imaging is a research and clinical tool that has been applied in a wide variety of sciences. One area of magnetic resonance imaging that has shown remarkable promise and growth in the past decade is magnetic susceptibility imaging. Imaging tissue susceptibility provides insight into the microstructural organization and chemical properties of biological tissues, but this image contrast is not well understood. The purpose of this work is to develop effective approaches to image, assess, and model the mechanisms that generate both isotropic and anisotropic magnetic susceptibility contrast in biological tissues, including myocardium and central nervous system white matter.

This document contains the first report of MRI-measured susceptibility anisotropy in myocardium. Intact mouse heart specimens were scanned using MRI at 9.4 T to ascertain both the magnetic susceptibility and the myofiber orientation of the tissue. The susceptibility anisotropy of myocardium was observed and measured by relating the apparent tissue susceptibility to the myofiber angle with respect to the applied magnetic field. A multi-filament model of myocardial tissue revealed that the diamagnetically anisotropic α-helix peptide bonds in myofilament proteins are capable of producing bulk susceptibility anisotropy on a scale measurable by MRI, and are potentially the chief sources of the experimentally observed anisotropy.
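As background for the angle dependence described above (a standard parameterisation, not quoted from the dissertation), the apparent susceptibility of a cylindrically symmetric tissue compartment measured along the applied field can be written as a function of the fiber-to-field angle θ:

```latex
% Background parameterisation (not quoted from the dissertation): for a
% cylindrically symmetric susceptibility tensor with principal values
% \chi_\parallel (along the myofiber axis) and \chi_\perp (transverse), the
% apparent susceptibility measured along the applied field at fiber-to-field
% angle \theta is
\chi_{\mathrm{app}}(\theta)
  = \chi_\parallel \cos^2\theta + \chi_\perp \sin^2\theta
  = \chi_\perp + \Delta\chi\,\cos^2\theta,
\qquad \Delta\chi = \chi_\parallel - \chi_\perp .
```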

The growing use of paramagnetic contrast agents in magnetic susceptibility imaging motivated a series of investigations regarding the effect of these exogenous agents on susceptibility imaging in the brain, heart, and kidney. In each of these organs, gadolinium increases susceptibility contrast and anisotropy, though the enhancements depend on the tissue type, compartmentalization of contrast agent, and complex multi-pool relaxation. In the brain, the introduction of paramagnetic contrast agents actually makes white matter tissue regions appear more diamagnetic relative to the reference susceptibility. Gadolinium-enhanced MRI yields tensor-valued susceptibility images with eigenvectors that more accurately reflect the underlying tissue orientation.

Despite the boost gadolinium provides, tensor-valued susceptibility image reconstruction is prone to image artifacts. A novel algorithm was developed to mitigate these artifacts by incorporating orientation-dependent tissue relaxation information into susceptibility tensor estimation. The technique was verified using a numerical phantom simulation and shown to improve susceptibility-based tractography in the brain, kidney, and heart. This work represents the first successful application of susceptibility-based tractography to a whole, intact heart.

The knowledge and tools developed throughout the course of this research were then applied to studying mouse models of Alzheimer’s disease in vivo, and studying hypertrophic human myocardium specimens ex vivo. Though a preliminary study using contrast-enhanced quantitative susceptibility mapping has revealed diamagnetic amyloid plaques associated with Alzheimer’s disease in the mouse brain ex vivo, non-contrast susceptibility imaging was unable to precisely identify these plaques in vivo. Susceptibility tensor imaging of human myocardium specimens at 9.4 T shows that susceptibility anisotropy is larger and mean susceptibility is more diamagnetic in hypertrophic tissue than in normal tissue. These findings support the hypothesis that myofilament proteins are a source of susceptibility contrast and anisotropy in myocardium. This collection of preclinical studies provides new tools and context for analyzing tissue structure, chemistry, and health in a variety of organs throughout the body.

Relevance:

90.00%

Publisher:

Abstract:

With the increasing prevalence and capability of autonomous systems within complex heterogeneous manned-unmanned environments (HMUEs), an important consideration is the impact of introducing automation on the optimal assignment of human personnel. The US Navy has implemented optimal staffing techniques before, in the 1990s and 2000s, with a "minimal staffing" approach. The results were poor, leading to the degradation of naval preparedness. Clearly, another approach to determining optimal staffing is necessary. To this end, the goal of this research is to develop human performance models for use in determining optimal manning of HMUEs. The human performance models are developed using an agent-based simulation of the aircraft carrier flight deck, a representative safety-critical HMUE. The Personnel Multi-Agent Safety and Control Simulation (PMASCS) simulates and analyzes the effects of introducing generalized maintenance crew skill sets and accelerated failure repair times on the overall performance and safety of the carrier flight deck. A behavioral model of five operator types (ordnance officers, chocks and chains, fueling officers, plane captains, and maintenance operators) is presented here along with an aircraft failure model. The main focus of this work is on the maintenance operators and aircraft failure modeling, since they have a direct impact on total launch time, a primary metric of carrier deck performance. With PMASCS I explore the effects of two variables on the total launch time of 22 aircraft: 1) the skill level of maintenance operators and 2) aircraft failure repair times while on the catapult (referred to as Phase 4 repair times). It is found that neither introducing a generic skill set to maintenance crews nor introducing a technology to accelerate Phase 4 aircraft repair times improves the average total launch time of 22 aircraft. An optimal manning level of 3 maintenance crews is found under all conditions, the point beyond which additional maintenance crews do not reduce the total launch time. An additional discussion is included of how these results change if the operations are relieved of the bottleneck of installing the holdback bar at launch time.
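A toy sketch of the kind of crew-count experiment described above, with entirely hypothetical timings and failure rates (not PMASCS or its data): sweep the number of maintenance crews in a simplified launch model and look for the crew count beyond which total launch time stops improving.

```python
# Toy illustration (hypothetical numbers, not PMASCS itself): sweep the number
# of maintenance crews in a simplified launch model and report the crew count
# beyond which the total launch time stops improving.
import random

def simulate_launch(n_aircraft=22, n_crews=3, failure_rate=0.2,
                    repair_time=15.0, launch_interval=1.0, seed=0):
    """Very simplified: failed aircraft occupy one of n_crews repair slots."""
    rng = random.Random(seed)
    crew_free_at = [0.0] * n_crews          # time each crew becomes available
    t = 0.0
    for _ in range(n_aircraft):
        t += launch_interval                # nominal time to spot and launch
        if rng.random() < failure_rate:     # aircraft fails before launch
            crew = min(range(n_crews), key=lambda i: crew_free_at[i])
            start = max(t, crew_free_at[crew])
            crew_free_at[crew] = start + repair_time
            t = start + repair_time         # launch waits for the repair
    return t

for crews in range(1, 7):
    avg = sum(simulate_launch(n_crews=crews, seed=s) for s in range(50)) / 50
    print(f"{crews} crews -> mean total launch time {avg:.1f} min")
```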

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a novel theory for performing multi-agent activity recognition without requiring large training corpora. The reduced need for data means that robust probabilistic recognition can be performed within domains where annotated datasets are traditionally unavailable. Complex human activities are composed of sequences of underlying primitive activities. We do not assume that the exact temporal ordering of primitives is necessary, so a complex activity can be represented as an unordered bag of primitives. Our three-tier architecture comprises low-level video tracking, event analysis and high-level inference. High-level inference is performed using a new, cascading extension of the Rao–Blackwellised Particle Filter. Simulated annealing is used to identify pairs of agents involved in multi-agent activity. We validate our framework using the benchmark PETS 2006 video surveillance dataset and our own sequences, achieving a mean recognition F-score of 0.82, a mean improvement of 17% over a Hidden Markov Model baseline.
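A small illustration of the unordered-bag representation only (the cascading Rao–Blackwellised Particle Filter and the tracking/event layers are not reproduced here; the activity definitions are hypothetical): each complex activity is a multiset of primitives, and an observation is scored by how much of each bag it covers.

```python
# Illustration of the unordered-bag idea only (the paper's cascading
# Rao-Blackwellised Particle Filter is not reproduced here): score how well an
# observed set of primitive events matches each complex-activity "bag",
# ignoring temporal order.
from collections import Counter

# Hypothetical activity definitions: multiset of primitive events per activity.
ACTIVITY_BAGS = {
    "luggage_abandonment": Counter({"enter": 1, "put_down_bag": 1, "walk_away": 1}),
    "meeting":             Counter({"enter": 2, "stop_near": 2, "talk": 1}),
}

def bag_score(observed_primitives, activity_bag):
    """Fraction of the activity's required primitives seen in the observation."""
    observed = Counter(observed_primitives)
    matched = sum(min(observed[p], n) for p, n in activity_bag.items())
    return matched / sum(activity_bag.values())

observation = ["enter", "walk_away", "put_down_bag"]
for name, bag in ACTIVITY_BAGS.items():
    print(name, round(bag_score(observation, bag), 2))
```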

Relevance:

90.00%

Publisher:

Abstract:

This study examines the business model complexity of Irish credit unions using a latent class approach to measure structural performance over the period 2002 to 2013. The latent class approach allows the endogenous identification of a multi-class framework for business models based on credit-union-specific characteristics. The analysis finds a three-class system to be appropriate, with the multi-class model dependent on three financial viability characteristics. This finding is consistent with the deliberations of the Irish Commission on Credit Unions (2012), which identified complexity and diversity in the business models of Irish credit unions and recommended that such complexity and diversity could not be accommodated within a one-size-fits-all regulatory framework. The analysis also highlights that two of the classes are subject to diseconomies of scale. This may suggest that credit unions would benefit from a reduction in scale, or perhaps that there is an imbalance in the present change process. Finally, relative performance differences are identified for each class in terms of technical efficiency. This suggests that there is an opportunity for credit unions to improve their performance by adopting within-class best practice or, alternatively, by switching to another class.

Relevance:

90.00%

Publisher:

Abstract:

This keynote presentation will report some of our research work and experience on the development and application of relevant methods, models, systems and simulation techniques in support of different types and various levels of decision making for business, management and engineering. In particular, the following topics will be covered:

Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
A framework, model and software prototype for modelling and simulation of deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)

Relevance:

90.00%

Publisher:

Abstract:

The Institute for Snow and Avalanche Research (SLF) in Switzerland developed SNOWPACK, a multi-layer thermodynamic snow model that simulates the geophysical properties of the snowpack (density, temperature, grain size, liquid water content, etc.), from which a stability index is computed. It has been shown that an adjustment of the microstructure would be necessary for implementation in Canada. The main objective of the present study is to enable SNOWPACK to model snow grain size more realistically and thereby obtain a more accurate prediction of snowpack stability using the grain-size-based index, the Structural Stability Index (SSI). To this end, the model's error (bias) was analysed against precise field measurements of grain size obtained with the IRIS instrument (InfraRed Integrated Sphere). The data were collected during the winter of 2014 at two sites in Canada: Glacier National Park, in British Columbia, and Jasper National Park. The Fidelity site was generally subject to equilibrium metamorphism, whereas the Jasper site experienced more pronounced kinetic metamorphism. At each site, the stratigraphy, density profiles and grain-size (IRIS) profiles were completed. The Fidelity profiles were complemented with micropenetrometer (SMP) measurements. Analysis of the density profiles showed good agreement with the modelled densities (R² = 0.76), and the simulated strength used for the SSI was therefore judged adequate. The instability layers predicted by SNOWPACK were identified from the variation in resistance in the SMP measurements. Analysis of the optical grain size revealed a systematic overestimation by the model, which is consistent with the literature. The optical grain-size error in an equilibrium environment was fairly constant, whereas the error in kinetic environments was more variable. Finally, a climate-oriented approach would be the best way to correct grain size for stability assessment in Canada.
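A minimal sketch, with made-up numbers rather than the study's data, of the kind of model-versus-measurement comparison described above: compute the mean bias and R² between modelled and IRIS-measured grain sizes.

```python
# Minimal sketch (hypothetical numbers, not the study's data): compare
# SNOWPACK-modelled optical grain sizes against IRIS field measurements and
# report the mean bias and coefficient of determination R^2.
def bias_and_r2(modelled, measured):
    n = len(measured)
    bias = sum(m - o for m, o in zip(modelled, measured)) / n
    mean_obs = sum(measured) / n
    ss_res = sum((o - m) ** 2 for m, o in zip(modelled, measured))
    ss_tot = sum((o - mean_obs) ** 2 for o in measured)
    return bias, 1.0 - ss_res / ss_tot

modelled_mm = [0.45, 0.60, 0.80, 1.10, 1.40]   # model output per layer
measured_mm = [0.35, 0.50, 0.75, 0.95, 1.20]   # IRIS measurements per layer
bias, r2 = bias_and_r2(modelled_mm, measured_mm)
print(f"mean bias = {bias:+.2f} mm, R^2 = {r2:.2f}")
```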

Relevance:

90.00%

Publisher:

Abstract:

Emotions are considered central to our lives, having a great impact on decision making, actions, memory, attention, etc. There is therefore great interest in simulating them in computational environments, so that everyday human situations can be studied in controlled settings. Although theoretical models of how emotions work exist, on their own they are insufficient for accurate simulation in computational environments. Taking one of these models, the OCC model, as a basis, this dissertation proposes the simulation of emotions in multi-agent environments by building a Bayesian network capable of translating stimuli generated in the environment into emotions. The use of Bayesian networks combined with the structure of the OCC model seeks to add unpredictability to the model, as well as to give it a computational structure. Applying the proposed model to a multi-agent system makes it possible to study the influence of emotions on the agents' actions and behaviour, allowing a comparison between the results obtained from a classical multi-agent simulation and from a multi-agent simulation that includes emotions. To validate and evaluate its operation, a study is presented in which the Bayesian network of emotions is applied to an example multi-agent model, observing the variations that emotions cause in the agents' behaviour.
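A schematic illustration, with hypothetical probabilities rather than the dissertation's network: a single conditional probability table maps an appraised stimulus to OCC-style emotions, and an emotion is sampled for the agent.

```python
# Tiny illustration (hypothetical probabilities, not the dissertation's
# network): a single conditional probability table maps an appraised stimulus
# to OCC-style emotions, and an emotion is sampled for the agent.
import random

# P(emotion | outcome of the event for the agent, who caused it)
CPT = {
    ("desirable",   "self"):  {"joy": 0.7, "pride": 0.3},
    ("desirable",   "other"): {"joy": 0.5, "gratitude": 0.5},
    ("undesirable", "self"):  {"distress": 0.6, "shame": 0.4},
    ("undesirable", "other"): {"distress": 0.5, "anger": 0.5},
}

def sample_emotion(outcome, agent_of_cause, rng=random):
    dist = CPT[(outcome, agent_of_cause)]
    emotions, probs = zip(*dist.items())
    return rng.choices(emotions, weights=probs, k=1)[0]

rng = random.Random(42)
print(sample_emotion("undesirable", "other", rng))   # e.g. "anger" or "distress"
```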

Relevance:

90.00%

Publisher:

Abstract:

Social interactions are frequently described as social exchanges. In the literature, social exchanges in multi-agent systems have been studied in several contexts in which social relations are interpreted as social exchanges. Among the problems studied, a fundamental one discussed in the literature is the regulation of social exchanges, for example the emergence of balanced exchanges over time, leading to social equilibrium and/or equilibrium/fairness behaviour. In particular, the regulation of social exchanges is difficult when agents have incomplete information about the other agents' exchange strategies, especially when the agents have different exchange strategies. This master's dissertation proposes a game-theoretic approach to the self-regulation of social exchanges in multi-agent systems. It proposes the Game of Self-Regulation of Social Exchange Processes (JAPTS), in an evolutionary and spatial version, in which agents organised in a complex network can evolve their different social exchange strategies. The exchange strategies are defined by the parameters of a fitness function. The possibility of equilibrium behaviour emerging is analysed when agents, trying to maximise their adaptation through the fitness function, seek to increase the number of successful interactions. A game of incomplete information is considered, since agents have no information about the other agents' strategies. For the strategy learning process, an evolutionary algorithm is used, in which agents, aiming to maximise their fitness function, act as self-regulators of the exchange processes enabled by the game, contributing to an increase in the number of successful interactions. Five different cases of society composition are analysed. For some cases, a second type of scenario is also analysed, in which the network topology is modified, representing some kind of mobility, in order to examine whether the results depend on the neighbourhood. In addition, a third scenario is studied, in which an influence policy is defined: at certain moments of the simulation the average values of the parameters defining the strategies adopted by the agents are made public, and agents adopting the same exchange strategy, influenced by this, imitate those values. The model was implemented in NetLogo.
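A schematic sketch with an assumed payoff rule (not the JAPTS model itself): agents on a ring each carry one exchange-strategy parameter, earn fitness from successful exchanges with neighbours, and imitate their best-performing neighbour with a small mutation, as in a simple spatial evolutionary game.

```python
# Schematic sketch only (hypothetical payoff rule, not the JAPTS model): agents
# on a ring each carry one exchange-strategy parameter, earn fitness from
# successful exchanges with neighbours, and imitate their best neighbour with
# a small mutation, as in a simple spatial evolutionary game.
import random

rng = random.Random(1)
N, GENERATIONS, THRESHOLD = 50, 200, 1.0
strategy = [rng.random() for _ in range(N)]          # "generosity" in [0, 1]

def neighbours(i):
    return [(i - 1) % N, (i + 1) % N]

for _ in range(GENERATIONS):
    # An exchange succeeds when the pair is generous enough in total.
    fitness = [sum(1 for j in neighbours(i)
                   if strategy[i] + strategy[j] >= THRESHOLD)
               for i in range(N)]
    new_strategy = strategy[:]
    for i in range(N):
        best = max(neighbours(i) + [i], key=lambda k: fitness[k])
        new_strategy[i] = min(1.0, max(0.0, strategy[best] + rng.gauss(0, 0.02)))
    strategy = new_strategy

print(f"mean strategy after evolution: {sum(strategy) / N:.2f}")
```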

Relevance:

90.00%

Publisher:

Abstract:

The mobile-robot route-planning problem consists of determining the best route for a robot, in a static and/or dynamic environment, capable of taking it from a starting point to a final point, also known as the goal state. This work employs a Genetic Algorithm-based approach to plan the routes of multiple robots in a complex environment composed of fixed and moving obstacles. Implementing the model in NetLogo, a tool for multi-agent simulations, made it possible to model the robots and obstacles in the environment as interacting agents, thereby enabling obstacle detection and avoidance. The approach searches for the best route for the robots and uses a model composed of the basic reproduction and mutation operators, plus a new double refinement operator capable of improving the best solutions found by eliminating useless movements. Moreover, each robot's route is calculated by generating sub-segments; that is, instead of computing a single route connecting the start and end points of the scenario, the method computes several short sub-routes which, when connected, form a single path capable of taking the robot to the goal state. Two scenarios were developed to evaluate scalability: the first is a simple scenario with a single robot, one moving obstacle and a few fixed obstacles; the second is a more robust, larger scenario with multiple robots and several fixed and moving obstacles. Finally, comparative performance tests were carried out between the Genetic Algorithm-based approach and the A* algorithm. The comparison criterion was the length of the routes obtained in the twenty simulations run for each approach, and the results were analysed using Student's t-test.
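A minimal sketch on a toy grid (operators and parameters are assumptions, not the dissertation's model): a genetic algorithm evolves move sequences toward a goal cell, and a simple refinement pass deletes useless back-and-forth moves, echoing the idea of an operator that eliminates useless movements.

```python
# Minimal sketch (toy grid and operators, not the dissertation's model): a
# genetic algorithm evolves a sequence of moves toward a goal cell, and a
# simple "refinement" pass deletes useless back-and-forth moves.
import random

rng = random.Random(3)
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
OPPOSITE = {"U": "D", "D": "U", "L": "R", "R": "L"}
START, GOAL, OBSTACLES = (0, 0), (5, 4), {(2, 2), (3, 1)}

def simulate(path):
    x, y = START
    hits = 0
    for m in path:
        dx, dy = MOVES[m]
        x, y = x + dx, y + dy
        hits += (x, y) in OBSTACLES
    return (x, y), hits

def fitness(path):
    (x, y), hits = simulate(path)
    return -(abs(x - GOAL[0]) + abs(y - GOAL[1])) - 5 * hits - 0.01 * len(path)

def refine(path):
    """Remove adjacent opposite moves (e.g. 'U' immediately followed by 'D')."""
    out = []
    for m in path:
        if out and OPPOSITE[out[-1]] == m:
            out.pop()
        else:
            out.append(m)
    return out

population = [[rng.choice("UDLR") for _ in range(15)] for _ in range(40)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for _ in range(30):
        a, b = rng.sample(parents, 2)
        cut = rng.randrange(1, 14)
        child = a[:cut] + b[cut:]                    # one-point crossover
        if child and rng.random() < 0.2:             # mutation
            child[rng.randrange(len(child))] = rng.choice("UDLR")
        children.append(refine(child))
    population = parents + children

best = max(population, key=fitness)
print("best path:", "".join(best), "ends at", simulate(best)[0])
```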

Relevance:

90.00%

Publisher:

Abstract:

The majority of research work carried out in the field of operations research uses methods and algorithms to optimise the pick-up and delivery problem. Most studies aim to solve the vehicle routing problem so as to accommodate optimal delivery orders, vehicles, etc. This paper focuses on a green logistics approach in which a city's existing public transport infrastructure is used for the delivery of small and medium-sized packaged goods, thereby helping to reduce urban congestion and greenhouse gas emissions. A study was carried out to investigate the feasibility of the proposed multi-agent-based simulation model in terms of cost, time and energy consumption. A multimodal Dijkstra shortest-path algorithm and Nested Monte Carlo Search are employed in a two-phase algorithmic approach used to generate a time-based cost matrix. The quality of the tour depends on the efficiency of the search algorithm implemented for plan generation and route planning. The results reveal a definite advantage of using public transportation over existing delivery approaches in terms of energy efficiency.
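A sketch of the first phase only, on a toy network (not the paper's data): Dijkstra shortest travel times between stops, collected into the kind of time-based cost matrix the two-phase approach relies on.

```python
# Sketch only (toy network, not the paper's model): Dijkstra shortest travel
# times on a small public-transport graph, collected into a time-based cost
# matrix between stops.
import heapq

# edge list: travel time in minutes between stops (assumed symmetric here)
EDGES = {("A", "B"): 4, ("B", "C"): 6, ("A", "C"): 12, ("C", "D"): 3, ("B", "D"): 10}
GRAPH = {}
for (u, v), w in EDGES.items():
    GRAPH.setdefault(u, []).append((v, w))
    GRAPH.setdefault(v, []).append((u, w))

def dijkstra(source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in GRAPH[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

stops = sorted(GRAPH)
cost_matrix = {s: dijkstra(s) for s in stops}        # time-based cost matrix
for s in stops:
    print(s, [cost_matrix[s][t] for t in stops])
```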

Relevance:

90.00%

Publisher:

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at the lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings strongly influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage, a comparative method using Conditional Value at Risk is analyzed, and time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
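As background on the linearization mentioned above (a standard result, stated here rather than quoted from the dissertation): a product of two binary variables can be replaced exactly by linear constraints on an auxiliary variable z.

```latex
% Standard exact linearization of a product of binaries (background, not
% quoted from the dissertation): introduce z = x_1 x_2 with x_1, x_2 \in \{0,1\}.
z \le x_1, \qquad
z \le x_2, \qquad
z \ge x_1 + x_2 - 1, \qquad
z \ge 0 .
```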

Relevance:

90.00%

Publisher:

Abstract:

Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop floor and retail performance. Despite the fact that we are working within a relatively novel and complex domain, it is clear that an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK’s top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested the first version of a retail-branch agent-based simulation model, focusing on how to simulate the effects of people management practices on customer satisfaction and sales. In our experiments we have looked at employee development and cashier empowerment as two examples of shop-floor management practices. In this paper we describe the underlying conceptual ideas and the features of our simulation model. We present a selection of experiments conducted to validate the simulation model and to show its potential for answering “what-if” questions in a retail context. We also introduce a novel performance measure, created to quantify customers’ satisfaction with service based on their individual shopping experiences.
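The abstract does not define the satisfaction measure itself, so the following is only a generic illustration of scoring individual shopping experiences (the thresholds and bonus are invented): satisfaction falls with waiting time and rises slightly when the customer received help on the shop floor.

```python
# Generic illustration only (the paper's actual satisfaction measure is not
# specified in this abstract): score each simulated customer's shopping
# experience from waiting time and whether help was received, then aggregate.
from statistics import mean

def experience_score(wait_minutes, helped, max_acceptable_wait=10.0):
    """1.0 for no wait, decreasing linearly; a small bonus if staff helped."""
    score = max(0.0, 1.0 - wait_minutes / max_acceptable_wait)
    if helped:
        score = min(1.0, score + 0.2)
    return score

# (wait at the till in minutes, was the customer helped on the shop floor?)
visits = [(2.0, True), (8.5, False), (12.0, True), (0.5, False)]
scores = [experience_score(w, h) for w, h in visits]
print(f"mean customer satisfaction: {mean(scores):.2f}")
```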