797 results for agent-based model
Abstract:
Purpose: Short product life cycles and/or mass customization necessitate reconfiguring the operational enablers of a supply chain (SC) from time to time in order to harness high levels of performance. The purpose of this paper is to identify the key operational enablers under a stochastic environment on which practitioners should focus while reconfiguring a SC network. Design/methodology/approach: The paper uses the interpretive structural modeling (ISM) approach, which presents a hierarchy-based model and the mutual relationships among the enablers. The contextual relationships needed for developing the structural self-interaction matrix (SSIM) among the various enablers are realized by conducting experiments through simulation of a hypothetical SC network. Findings: The research identifies the operational enablers with high driving power towards the assumed performance measures. These enablers require maximum attention and are of strategic importance while reconfiguring the SC. Practical implications: ISM provides SC managers with a useful tool to strategically adopt and focus on the key enablers that have comparatively greater potential for enhancing SC performance under the given operational settings. Originality/value: The present research recognizes the importance of SC flexibility under the premise of reconfiguring the operational units in order to harness high levels of SC performance. Given the digraph resulting from ISM, the decision maker can focus on the key enablers for effective reconfiguration. The study is one of the first efforts to develop the contextual relations among operational enablers for the SSIM matrix through the integration of discrete event simulation with ISM. © Emerald Group Publishing Limited.
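The ISM mechanics the abstract refers to can be illustrated with a small sketch. The enabler names and the influence matrix below are hypothetical stand-ins, not the paper's actual SSIM data; the code only shows how driving power and dependence fall out of the reachability matrix.

```python
# Hypothetical enablers and influence relations (not the paper's data).
enablers = ["sourcing flexibility", "routing flexibility",
            "capacity scalability", "information sharing"]

# Initial reachability matrix from an assumed SSIM:
# A[i][j] = 1 if enabler i influences enabler j (diagonal = 1 by convention).
A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 0],
     [1, 1, 1, 1]]

# Transitive closure (Warshall's algorithm) gives the final reachability matrix.
n = len(enablers)
R = [row[:] for row in A]
for k in range(n):
    for i in range(n):
        for j in range(n):
            R[i][j] = R[i][j] or (R[i][k] and R[k][j])

# Driving power = row sum; dependence = column sum. Enablers with high
# driving power are the ones ISM flags as strategically important.
for i, name in enumerate(enablers):
    driving = sum(R[i])
    dependence = sum(R[j][i] for j in range(n))
    print(f"{name:22s} driving power = {driving}, dependence = {dependence}")
```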
Abstract:
Large-scale evacuations are a recurring theme on news channels, whether in response to major natural or man-made disasters. Warning dissemination plays a key part in the success of such large-scale evacuations, and its inadequacy in certain cases has been a 'primary contribution to deaths and injuries' (Hayden et al., 2007). Along with technology-driven 'official' warning channels (e.g. sirens, mass media), unofficial channels (e.g. neighbours, personal contacts, volunteer wardens) have proven to be significant in warning the public of the need to evacuate. Although post-evacuation studies identify evacuees as disseminators of the warning message, there has not been a detailed study quantifying the effects of such behaviour on warning message dissemination. This paper develops an Agent-Based Simulation (ABS) model of multiple agents (evacuee households) in a hypothetical community to investigate the impact of behaviour as an unofficial channel on the overall warning dissemination. The parameters studied include the percentage of people who warn their neighbours, the efficiency of different official warning channels, and the delay time before warning neighbours. The results showed that even a low proportion of people willing to warn their neighbours has a considerable impact on the overall warning dissemination. © 2012 Elsevier B.V. All rights reserved.
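To make the mechanism concrete, here is a minimal Python sketch of this kind of warning-dissemination ABS. All parameter names and values are illustrative assumptions, not the paper's calibrated inputs.

```python
import random

N = 1000              # households in the hypothetical community
P_OFFICIAL = 0.05     # per-step chance an uninformed household hears an official warning
P_RELAY = 0.2         # share of households willing to warn their neighbours
RELAY_DELAY = 3       # steps before a warned household starts relaying
NEIGHBOURS = 4        # contacts a relaying household can warn per step

random.seed(1)
informed_at = {}      # household id -> step at which it was warned
relayers = {h for h in range(N) if random.random() < P_RELAY}

for step in range(60):
    newly = set()
    # Official channels (sirens, mass media) reach households independently.
    for h in range(N):
        if h not in informed_at and random.random() < P_OFFICIAL:
            newly.add(h)
    # Unofficial channel: informed relayers warn random contacts after a delay.
    for h, t in informed_at.items():
        if h in relayers and step - t >= RELAY_DELAY:
            for contact in random.sample(range(N), NEIGHBOURS):
                if contact not in informed_at:
                    newly.add(contact)
    for h in newly:
        informed_at[h] = step
    if len(informed_at) == N:
        break

print(f"{len(informed_at)}/{N} households warned after {step + 1} steps")
```

Even with P_RELAY as low as 0.2, the relay loop dominates the later stages of dissemination, which is the qualitative effect the abstract reports.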
Abstract:
Timely warning of the public during large-scale emergencies is essential to ensure safety and save lives. This ongoing study proposes an agent-based simulation model to simulate warning message dissemination among the public through both official and unofficial channels. The proposed model was developed in NetLogo software for a hypothetical area, and requires input parameters such as the effectiveness of each official source (%), the estimated time to begin informing others, the estimated time to inform others, and the estimated percentage of people who do not relay the message. This paper demonstrates a means of factoring the behaviour of the public as informants into estimating the effectiveness of warning dissemination during large-scale emergencies. The model provides a tool for practitioners to test the potential impact of the informal channels on the overall warning time and the sensitivity of the modelling parameters. The tool would also help practitioners to persuade evacuees to disseminate the warning message by informing others, similar to the 'Run to thy neighbour' campaign conducted by the Red Cross.
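A hedged sketch of the kind of sensitivity test such a tool supports: a crude compartment-style approximation of warning spread, sweeping the share of the public willing to relay the message. The function and parameter names are our assumptions, not the model's actual NetLogo variables.

```python
def warned_fraction(p_official, p_relay, contacts, steps):
    w = 0.0                                          # fraction already warned
    for _ in range(steps):
        official = p_official * (1.0 - w)            # sirens, mass media
        unofficial = p_relay * contacts * w * (1.0 - w)  # neighbour-to-neighbour relay
        w = min(1.0, w + official + unofficial)
    return w

# Sensitivity sweep over the share of the public that relays the warning.
for p_relay in (0.0, 0.1, 0.3, 0.5):
    f = warned_fraction(p_official=0.03, p_relay=p_relay, contacts=2.0, steps=30)
    print(f"relay share {p_relay:.1f} -> {f:.0%} warned after 30 steps")
```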
Abstract:
The main idea of our approach is that the domain ontology is not only an instrument of learning but also an object for examining student skills. We propose that students build the domain ontology of the examined discipline, which is then compared with a reference (etalon) ontology. Analysis of the students' mistakes allows us to offer them personalized recommendations and to improve the course materials in general. For knowledge interoperability we apply Semantic Web technologies. The application of agent-based technologies in e-learning provides personalization for students and tutors and frees all users from routine operations.
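A minimal sketch of the comparison step, assuming both ontologies are reduced to subject-relation-object triples (the triples below are invented examples, not from the paper):

```python
# Reference ("etalon") ontology and a student's attempt, as triple sets.
etalon = {
    ("RDF", "is_a", "data model"),
    ("OWL", "extends", "RDF"),
    ("SPARQL", "queries", "RDF"),
}
student = {
    ("RDF", "is_a", "data model"),
    ("OWL", "extends", "XML"),     # a mistaken relation
}

missing = etalon - student         # relations the student omitted
wrong = student - etalon           # relations the student got wrong
precision = len(student & etalon) / len(student)
recall = len(student & etalon) / len(etalon)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
for triple in sorted(wrong):
    print("recommend revising:", triple)   # basis for personalized recommendations
```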
Abstract:
Linguistic theory and cognitive, information, and mathematical modeling are all useful as we attempt to achieve a better understanding of the Language Faculty (LF). This cross-disciplinary approach will eventually lead to the identification of the key principles applicable in systems for Natural Language Processing. The present work concentrates on the syntax-semantics interface. We start from recursive definitions and the application of optimization principles, and gradually develop a formal model of syntactic operations. The result, a Fibonacci-like syntactic tree, is in fact an argument-based variant of natural language syntax. This representation (the argument-centered model, ACM) is derived by a recursive calculus that generates a node which connects arguments and expresses the relations between them. The reiterative operation assigns the primary role to entities as the key components of syntactic structure. We provide experimental evidence in support of the argument-based model. We also show that the mental computation of syntax is influenced by the inter-conceptual relations between the images of entities in a semantic space.
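The "Fibonacci-like" growth can be illustrated with a toy recursion, under the assumption (ours, not necessarily the authors' exact calculus) that an argument node licenses an argument plus a relation while a relation node licenses only an argument, in the manner of an L-system:

```python
def next_generation(symbols):
    out = []
    for s in symbols:
        if s == "A":           # an argument licenses an argument and a relation
            out += ["A", "R"]
        else:                  # a relation licenses only an argument
            out += ["A"]
    return out

level = ["A"]
for depth in range(8):
    print(f"depth {depth}: {len(level)} nodes")   # 1, 2, 3, 5, 8, 13, 21, 34
    level = next_generation(level)
```

The node counts per generation follow the Fibonacci sequence, which is the tree shape the abstract alludes to.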
Abstract:
Last mile relief distribution is the final stage of humanitarian logistics. It refers to the supply of relief items from local distribution centers to the disaster-affected people (Balcik et al., 2008). In the last mile relief distribution literature, researchers have focused on using optimisation techniques to determine exact optimal solutions (Liberatore et al., 2014), but behavioural factors need to be included alongside those optimisation techniques to obtain better predictive results. This paper explains how improving the coordination factor increases the effectiveness of the last mile relief distribution process. The methodology has two stages. Interviews: the authors conducted interviews with the Indian Government and with South Asian NGOs to identify the critical factors for final relief distribution. After thematic and content analysis of the interviews and the reports, the authors found several behavioural factors which affect the final relief distribution. Model building: last mile relief distribution in India follows a specific framework described in the Indian Government disaster management handbook. We modelled this framework using agent-based simulation and investigated the impact of coordination on effectiveness, which we define as the speed and accuracy with which aid is delivered to affected people, testing through simulation modelling whether coordination improves effectiveness.
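A toy sketch of the coordination experiment (our own construction, not the Indian Government framework itself): NGO agents choose villages to serve, and under coordination they share which villages have already been served, which should improve both speed (steps to full coverage) and accuracy (fewer duplicated deliveries).

```python
import random

def run(coordination, n_villages=50, n_agents=5, steps=30, seed=2):
    random.seed(seed)
    served = set()
    duplicated = 0
    for step in range(steps):
        for _ in range(n_agents):
            if coordination and len(served) < n_villages:
                # Coordinated agents know which villages are already served.
                target = random.choice(
                    [v for v in range(n_villages) if v not in served])
            else:
                # Uncoordinated agents pick villages blindly.
                target = random.randrange(n_villages)
            if target in served:
                duplicated += 1
            served.add(target)
        if len(served) == n_villages:
            return step + 1, duplicated
    return steps, duplicated

for coord in (False, True):
    speed, dup = run(coord)
    print(f"coordination={coord}: full coverage in {speed} steps, "
          f"{dup} duplicated deliveries")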
Abstract:
One of the ultimate aims of Natural Language Processing is to automate the analysis of the meaning of text. A fundamental step in that direction consists in enabling effective ways to automatically link textual references to their referents, that is, real-world objects. The work presented in this paper addresses the problem of attributing a sense to proper names in a given text, i.e., automatically associating words representing Named Entities with their referents. The method for Named Entity Disambiguation proposed here is based on the concept of semantic relatedness, which in this work is obtained via a graph-based model over Wikipedia. We show that, without building the traditional bag-of-words representation of the text, but instead considering only the named entities within the text, the proposed method achieves results competitive with the state of the art on two different datasets.
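A hedged sketch in the spirit of the approach, using a Milne-Witten-style relatedness measure computed from shared Wikipedia inlinks; the link sets below are invented, whereas the real system operates over the full Wikipedia graph:

```python
import math

N = 1_000_000   # assumed number of Wikipedia pages

inlinks = {     # toy inlink sets for candidate referents and a context entity
    "Paris (France)": {1, 2, 3, 4, 5},
    "Paris (Texas)":  {6, 7},
    "Seine":          {2, 3, 8},
}

def relatedness(a, b):
    # Milne-Witten semantic relatedness from shared inlinks.
    A, B = inlinks[a], inlinks[b]
    common = A & B
    if not common:
        return 0.0
    num = math.log(max(len(A), len(B))) - math.log(len(common))
    den = math.log(N) - math.log(min(len(A), len(B)))
    return 1.0 - num / den

# Disambiguate a mention by maximizing relatedness to the other
# entities found in the same text (here a single unambiguous one).
context = ["Seine"]
candidates = ["Paris (France)", "Paris (Texas)"]
best = max(candidates,
           key=lambda c: sum(relatedness(c, e) for e in context))
print("mention 'Paris' resolved to:", best)
```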
Abstract:
As more of the economy moves from traditional manufacturing to the service sector, the nature of work is becoming less tangible and thus the representation of human behaviour in models is becoming more important. Representing human behaviour and decision making in models is challenging, both in terms of capturing the essence of the processes and in terms of how those behaviours and decisions are or can be represented in the models themselves. In order to advance understanding in this area, a useful first step is to evaluate and start to classify the various types of behaviour and decision making that need to be modelled. This talk will set out an initial classification of the different types of behaviour and decision making that a modeller might want to represent in a model. It will then assess the main simulation methods in terms of their capability to represent these various aspects. The three main simulation methods (System Dynamics, Agent-Based Modelling and Discrete Event Simulation) all achieve this to varying degrees, and there is some evidence that all three can, within limits, represent the key aspects of the system being modelled. The three approaches are then assessed for their suitability in modelling these various aspects. Illustrations of behavioural modelling will be provided from cases in supply chain management, evacuation modelling and rail disruption.
Abstract:
Random distributed feedback (DFB) fiber lasers have attracted great attention since their first demonstration [1]. Despite big advances in practical laser systems, the spectral properties of random DFB fiber lasers are far from being understood or even numerically modelled. To date, only the generation power could be calculated and optimized numerically [1,2] or analytically [3] within the power balance model. However, the spectral and statistical properties of a random DFB fiber laser cannot be found in this way. Here we present the first numerical modelling of the random DFB fiber laser, including its spectral and statistical properties, using an NLSE-based model. © 2013 IEEE.
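For reference, a standard textbook form of the gain-extended NLSE on which such models are built (the paper's exact terms and coefficients may differ):

\[
  \frac{\partial A}{\partial z}
  = -\frac{i\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2}
  + i\gamma\,|A|^2 A
  + \frac{g-\alpha}{2}\,A,
\]

where \(A(z,t)\) is the slowly varying field envelope, \(\beta_2\) the group-velocity dispersion, \(\gamma\) the Kerr nonlinearity coefficient, \(g\) the distributed Raman gain and \(\alpha\) the fiber loss.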
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016.
Abstract:
This study explores the ongoing pedagogical development of a number of undergraduate design and engineering programmes in the United Kingdom. Observations and data have been collected over several cohorts, bringing a valuable perspective to the approaches piloted across two similar university departments while trialling a number of innovative learning strategies. In addition to the concurrent institutional studies, the work explores curriculum design that applies the principles of Co-Design and of multidisciplinary and transdisciplinary learning, with engineering and product design students working alongside each other through a practical problem-solving learning approach known as the CDIO learning initiative (Conceive, Design, Implement and Operate) [1]. The study builds on previous work presented at the 2010 EPDE conference: The Effect of Personality on the Design Team: Lessons from Industry for Design Education [2]. The work presented in this paper applies those findings to mixed design and engineering team-based learning, building on the insight gained through a number of industrial process case studies carried out in current design practice. Developments in delivery also align the CDIO principles of learning through doing with a practice-based, collaborative learning experience, and include elements of the TRIZ creative problem-solving technique [3]. The paper outlines case studies involving a number of mixed engineering and design student projects that highlight the CDIO principles, combined with an external industrial design brief. It compares and contrasts the learning experience with that of a KTP-derived student project, to examine an industry-based model for student projects. In addition, key areas of best practice will be presented, and student work from each mode will be discussed at the conference.
Abstract:
Competition between Higher Education Institutions is increasing at an alarming rate, while changes in the surrounding environment and in the demands of the labour market are frequent and substantial. Universities must meet the requirements of both national and European legislation. The Bologna Declaration aims at providing guidelines and solutions for these problems and challenges of European Higher Education. One of its main goals is the introduction of a common framework of transparent and comparable degrees that ensures the recognition of the knowledge and qualifications of citizens all across the European Union. This paper discusses a knowledge management approach that highlights the importance of knowledge representation tools such as ontologies. The discussed ontology-based model supports the creation of transparent curricula content (Educational Ontology) and the promotion of reliable knowledge testing (Adaptive Knowledge Testing System).
Abstract:
We study a family of tax evasion models in which a flat-rate tax finances only the provision of public goods and audits, and wage differences are neglected. We focus on comparing two modelling approaches. In the first, every worker earns the same income and each year declares the amount that maximizes the sum of the utility of consumption funded from retained income and the moral utility derived from the declaration itself. The latter is the product of three factors: the worker's exogenous tax morale, the average income declaration observed in their neighbourhood in the previous year, and the endogenous utility of their own declaration. In the second approach, the agents act according to simple heuristic rules. While the optimizing model yields traditionally shaped Laffer curves, the heuristics-based models produce (linearly) increasing Laffer curves. This difference stems from a peculiar type of behaviour that emerges in the heuristics-based model: a number of agents end up in an unstable, morally liminal state, alternating between altruism and selfishness.
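A toy reconstruction of the heuristics variant (our own simplification, not the paper's calibrated model): each agent's declared share of income blends its exogenous tax morale with the previous year's observed average declaration, shifted down as the tax rate rises; revenue is then traced over tax rates to sketch a Laffer curve.

```python
import random

def revenue(tax_rate, n=500, years=50, seed=3):
    random.seed(seed)
    morale = [random.random() for _ in range(n)]   # exogenous tax morale per agent
    declared = [0.5] * n                           # initial declared share of income
    for _ in range(years):
        avg = sum(declared) / n                    # social norm observed last year
        # Heuristic rule: blend own morale with the norm; declare less as tax rises.
        declared = [max(0.0, min(1.0, 0.5 * m + 0.5 * avg - 0.2 * tax_rate))
                    for m in morale]
    return tax_rate * sum(declared) / n            # revenue per unit of income

# Trace a Laffer curve over the tax rate.
for rate in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"tax rate {rate:.1f} -> revenue {revenue(rate):.3f}")
```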
Abstract:
In the first part of the paper, four ideal types of corruption are presented using the principal-agent-client model as a conceptual framework: bribery and extortion are defined as transactions between agent and client, while embezzlement and fraud are defined as transactions between principal and agent. These elementary corruption transactions are represented as simple directed graphs. We then examine the possible motivations of the participants in corrupt transactions (e.g. reducing transaction costs and the risk of detection), that is, which factors most encourage the actors in corrupt situations to embed their transactions in various personal, business, political and other institutional networks. In the second part, drawing on the results of our earlier empirical corruption research carried out in Hungary, we present the social and institutional embeddedness of some typical Hungarian corruption transactions. Four case studies are analysed in detail, and the typical corruption networks presented (e.g. those linked to party financing or to obtaining permits) are depicted as complex, multi-actor, multiplex graphs. Finally, the evolution of these complex networks is examined in terms of the number of actors, the complexity of the network configurations, and the degree of personal and/or institutional embeddedness.
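The four ideal types map naturally onto small directed graphs. A sketch with invented edge labels (our shorthand for the transactions, not the paper's notation):

```python
# The four ideal types in the principal-agent-client framing, as directed
# graphs; edge labels are our shorthand for the underlying transactions.
corruption_types = {
    "bribery":      [("client", "agent", "payment"),
                     ("agent", "client", "favour")],
    "extortion":    [("agent", "client", "threat"),
                     ("client", "agent", "payment")],
    "embezzlement": [("principal", "agent", "entrusted resources"),
                     ("agent", "principal", "false accounting")],
    "fraud":        [("agent", "principal", "false information"),
                     ("principal", "agent", "undue transfer")],
}

for name, edges in corruption_types.items():
    arrows = "; ".join(f"{u} -> {v} [{label}]" for u, v, label in edges)
    print(f"{name:12s}: {arrows}")
```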
Abstract:
The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, shared-nothing architecture and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research with an enhanced client-server scheme, inherent scalability and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agents, information agents and interface agents. The discussion of the problem domain and the deployment of the computation agent and the information agent are presented along with the analysis, design and implementation of experimental systems in high performance Internet computing and in scalable Web searching. In the computation agent study, high performance Internet computing is achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problems of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable and scalable solution to cope with the growth of the Web and of the information on the Web. Our research reveals that with the deployment of distributed software agents in Internet computing, we gain a more cost-effective approach to making better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
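The brute-force prototype rests on partitioning the keyspace across computation agents. A Python sketch of that partitioning idea with a toy single-byte XOR cipher (the dissertation's JAM prototype was written in Java and attacked a real cipher):

```python
from multiprocessing import Pool

KEY = 173                                    # the "unknown" single-byte key
PLAINTEXT = b"attack at dawn"
CIPHERTEXT = bytes(b ^ KEY for b in PLAINTEXT)

def search(key_range):
    # Each computation agent exhaustively tries its assigned sub-range.
    for key in key_range:
        if bytes(b ^ key for b in CIPHERTEXT) == PLAINTEXT:
            return key
    return None

if __name__ == "__main__":
    agents = 4
    # Interleaved split of the 256-key space across the agents.
    ranges = [range(i, 256, agents) for i in range(agents)]
    with Pool(agents) as pool:
        found = [k for k in pool.map(search, ranges) if k is not None]
    print("recovered key:", found[0])        # -> 173
```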