961 results for "Agent Approach"


Relevância: 30.00%

Resumo:

The main idea of our approach is that the domain ontology is not only an instrument of learning but also an object for examining student skills. We propose that students build the domain ontology of the discipline under examination and then compare it with a reference ("etalon") ontology. Analysis of student mistakes allows us to offer them personalized recommendations and to improve the course materials in general. For knowledge interoperability we apply Semantic Web technologies. The application of agent-based technologies in e-learning provides personalization for students and tutors and frees all users from routine operations.
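
The comparison step can be sketched as a set difference over ontology triples. The following is a minimal, hypothetical illustration (all concept names and relations are invented, and the abstract's ontology format is not specified):

```python
# Hypothetical sketch: compare a student-built ontology against a reference
# ("etalon") ontology, both represented as sets of (subject, relation, object)
# triples. Missing triples drive recommendations; spurious ones flag mistakes.

def diff_ontologies(student, etalon):
    """Return triples the student omitted and triples the student got wrong."""
    missing = etalon - student      # concepts/relations the student omitted
    spurious = student - etalon     # statements not supported by the reference
    return missing, spurious

etalon = {
    ("Queue", "is_a", "DataStructure"),
    ("Queue", "has_property", "FIFO"),
    ("Stack", "is_a", "DataStructure"),
}
student = {
    ("Queue", "is_a", "DataStructure"),
    ("Queue", "has_property", "LIFO"),   # a student mistake
}

missing, spurious = diff_ontologies(student, etalon)
for triple in sorted(missing):
    print("Review:", triple)             # basis for a personalized recommendation
for triple in sorted(spurious):
    print("Incorrect:", triple)
```

In practice the comparison would run over RDF/OWL structures rather than bare tuples, but the recommendation logic reduces to the same two set differences.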

Relevância: 30.00%

Resumo:

The problem of describing the interaction between spatially distributed agents in the form of dialogues is explored. The concept of process synchronization is analyzed in order to formalize the specification of interaction at the level of the events constituting the processes. An approach is offered to formalizing the description of synchronization conditions in which both the independent behavior and the communications of agents can be represented at the logical level. It is shown that the collective behavior of agents can be specified by a synthetic temporal logic that unites linear-time and branching-time temporal logics.
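
The abstract works at the level of temporal logic; as a rough executable approximation (event names and process definitions invented, CSP-style synchronization assumed rather than the paper's formalism), one can check whether a joint trace is consistent with each agent's event sequence when events shared by both alphabets must occur jointly:

```python
# Minimal sketch: two agents with event alphabets that overlap on shared
# events. A joint trace is valid if its projection onto each alphabet
# reproduces that agent's own event sequence (shared events synchronize).

def valid_sync_trace(trace, alpha_a, alpha_b, proc_a, proc_b):
    """True if `trace` projects onto each agent's expected sequence."""
    ia = ib = 0
    for ev in trace:
        if ev in alpha_a:                     # event visible to agent A
            if ia >= len(proc_a) or proc_a[ia] != ev:
                return False
            ia += 1
        if ev in alpha_b:                     # event visible to agent B
            if ib >= len(proc_b) or proc_b[ib] != ev:
                return False
            ib += 1
    return ia == len(proc_a) and ib == len(proc_b)

ALPHA_A, ALPHA_B = {"think", "send"}, {"listen", "send"}
PROC_A, PROC_B = ["think", "send"], ["listen", "send"]

print(valid_sync_trace(["think", "listen", "send"],
                       ALPHA_A, ALPHA_B, PROC_A, PROC_B))   # valid interleaving
print(valid_sync_trace(["send", "think", "listen"],
                       ALPHA_A, ALPHA_B, PROC_A, PROC_B))   # "send" too early
```

Linear-time formulas constrain individual traces like these; the branching-time side of the paper's synthetic logic would instead quantify over the tree of all such traces.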

Relevância: 30.00%

Resumo:

An approach to building distributed decision support systems is proposed. A framework for a distributed DSS is defined, and questions of problem formulation and solving using artificial intelligent agents in the system core are examined.

Relevância: 30.00%

Resumo:

This paper describes a didactic methodology combining current e-learning methods with the support of intelligent-agent technologies. The aim is to foster a synthesis between the theoretical and the practical approach using a so-called Intelligent Agent: software that exploits artificial intelligence and operates as a tutor, assisting users in their training activities. The paper illustrates how such an Intelligent Agent (IA) algorithm is used in the training of employees working in the transportation sector, drawing on the experience gained in the PARMENIDE project (Promoting Advanced Resources and Methodologies for New Teaching and Learning Solutions in Digital Education).

Relevância: 30.00%

Resumo:

2000 Mathematics Subject Classification: 60G48, 60G20, 60G15, 60G17. JEL Classification: G10

Relevância: 30.00%

Resumo:

There has been increasing interest in the use of agent-based simulation and some discussion of the relative merits of this approach compared to discrete-event simulation. Views differ on whether agent-based simulation offers capabilities that discrete-event simulation cannot provide, or whether all agent-based applications can, at least in theory, be undertaken using a discrete-event approach. This paper presents a simple agent-based NetLogo model and corresponding discrete-event versions implemented in the widely used ARENA software. The two discrete-event versions presented use, respectively, the traditional process-flow approach normally adopted in discrete-event simulation software and an agent-based approach to the model build. In addition, a real-time spatial visual display facility is provided using a spreadsheet platform controlled by VBA code embedded within the ARENA model. Initial findings from this investigation are that discrete-event simulation can indeed be used to implement agent-based models and, with suitable integration elements such as VBA, can provide the spatial displays associated with agent-based software.
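
The paper's central point — that a discrete-event engine can drive agent behaviour — can be sketched outside ARENA or NetLogo. The following invented Python illustration (not the paper's models) puts random-walk agents on a classic future-event list, so each agent activation is just an event popped from the list:

```python
import heapq
import random

# Minimal sketch: a discrete-event engine whose only event type is "agent
# acts". Each agent reschedules its own next activation, showing that
# agent-based behaviour can be implemented with a future-event list.

random.seed(1)

class Agent:
    def __init__(self, name):
        self.name, self.pos = name, 0

    def act(self, now, schedule):
        self.pos += random.choice([-1, 1])              # random-walk step
        schedule(now + random.uniform(0.5, 1.5), self)  # next activation

def run(agents, horizon):
    fel = []                 # future-event list (a min-heap keyed on time)
    counter = 0              # tie-breaker so agents are never compared
    def schedule(t, agent):
        nonlocal counter
        heapq.heappush(fel, (t, counter, agent))
        counter += 1
    for agent in agents:
        schedule(0.0, agent)
    while fel:
        t, _, agent = heapq.heappop(fel)
        if t > horizon:
            break
        agent.act(t, schedule)
    return {agent.name: agent.pos for agent in agents}

print(run([Agent("a"), Agent("b")], horizon=10.0))
```

The spatial display the paper builds with VBA corresponds here to reading each agent's `pos` between events; the simulation mechanics are pure discrete-event.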

Relevância: 30.00%

Resumo:

Agent-based modelling techniques have been employed in macroeconomics for some time. This study posits a few popular saving rules and, in an adaptive-evolutionary setting, draws endogenous conclusions about their relative viability. Three types of agent are introduced: a prudent one, a short-sighted one, and one behaving according to the permanent-income hypothesis. Under extremely strong selection pressure the prudent type unambiguously crowds out the other two. The short-sighted type appears to be the second most viable, but even at medium selection pressure none of the types disappears. At the efficiency of capital usually assumed in macroeconomics, the prudent type introduces a tendency toward over-investment, and the economy attains a savings rate above the golden-rule level. Relaxing credit constraints can lead to even greater over-investment: as the volume of credit grows, capital owners in effect allow themselves to be "exploited" by those who have no capital income. From the standpoint of long-run average consumption, a balanced mix of the three types gives the best result, although it entails considerably larger fluctuations than when only prudent households exist.
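
The selection mechanism can be illustrated with a toy model (all parameter values, saving rates, and the fitness measure below are invented and far simpler than the paper's economy): each rule's fitness is average consumption in a one-good growth model, and population shares evolve under a replicator-style update whose exponent plays the role of selection pressure.

```python
import math

# Toy sketch: three fixed saving rates compete. Fitness is average
# consumption over a finite horizon in a Cobb-Douglas economy; shares
# evolve by exponential (replicator-style) selection.

RULES = {"prudent": 0.30, "myopic": 0.05, "permanent_income": 0.15}

def avg_consumption(s, periods=200, k=1.0, alpha=0.3, delta=0.05):
    total = 0.0
    for _ in range(periods):
        y = k ** alpha                  # Cobb-Douglas output
        total += (1 - s) * y            # consumption this period
        k = (1 - delta) * k + s * y     # capital accumulation
    return total / periods

def evolve(shares, pressure, steps=50):
    fitness = {name: avg_consumption(rate) for name, rate in RULES.items()}
    for _ in range(steps):
        weighted = {n: shares[n] * math.exp(pressure * fitness[n])
                    for n in shares}
        z = sum(weighted.values())
        shares = {n: w / z for n, w in weighted.items()}
    return shares

start = {n: 1 / 3 for n in RULES}
print(evolve(start, pressure=5.0))   # strong selection: one rule dominates
print(evolve(start, pressure=0.1))   # weak selection: the rules coexist
```

With these (invented) parameters the prudent rate coincides with the golden rule, so it dominates under strong selection while all three types persist under weak selection — qualitatively the pattern the abstract reports.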

Relevância: 30.00%

Resumo:

A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is helpful for studying other distributed and concurrent systems as well. Providing such a methodology is a challenge, however, because of agent mobility in mobile agent systems. The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach that verifies models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architectural configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the nets involved. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture for mobile agent systems and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency, and safety of the medical information processing system. From the successful modeling and analysis of the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis but also expands the application scope of model-checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, this system exhibits the high flexibility, efficiency, and low cost of mobile agent technologies.

Relevância: 30.00%

Resumo:

In the past two decades, multi-agent systems (MAS) have emerged as a new paradigm for conceptualizing large and complex distributed software systems. A multi-agent system view provides a natural abstraction for both the structure and the behavior of modern-day software systems. Although there were many conceptual frameworks for using multi-agent systems, there was no well-established and widely accepted method for modeling them. This dissertation research addressed the representation and analysis of multi-agent systems based on model-oriented formal methods. The objective was to provide a systematic approach for studying MAS at an early stage of system development to ensure the quality of the design. Given that there was no well-defined formal model directly supporting agent-oriented modeling, this study centered on three main topics: (1) adapting a well-known formal model, predicate transition nets (PrT nets), to support MAS modeling; (2) formulating a modeling methodology to ease the construction of formal MAS models; and (3) developing a technique to support machine analysis of formal MAS models using model-checking technology. PrT nets were extended with the notions of dynamic structure, agent communication, and coordination to support agent-oriented modeling. An aspect-oriented technique was developed to address the modularity of agent models and the compositionality of incremental analysis. A set of translation rules was defined to systematically translate formal MAS models into concrete models that can be verified with the model checker SPIN (Simple Promela Interpreter). This dissertation presents the framework developed for modeling and analyzing MAS, including a well-defined process model based on nested PrT nets and a comprehensive methodology to guide the construction and analysis of formal MAS models.
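
As a toy stand-in for the net-based models and SPIN-style checks described in the two dissertation abstracts above (the net, its places, and its transitions are all invented, and this is an ordinary 1-safe place/transition net, not a PrT net): a marking-graph search answers reachability questions of the kind a model checker verifies.

```python
from collections import deque

# Minimal sketch: an ordinary 1-safe place/transition net, markings as
# frozensets of marked places, and breadth-first reachability over the
# marking graph.

# transition name -> (places consumed, places produced)
NET = {
    "register": ({"idle"}, {"waiting"}),
    "dispatch": ({"waiting", "agent_free"}, {"serving"}),
    "finish":   ({"serving"}, {"idle", "agent_free"}),
}

def enabled(marking, trans):
    pre, _ = NET[trans]
    return pre <= marking            # all input places marked

def fire(marking, trans):
    pre, post = NET[trans]
    return frozenset((marking - pre) | post)

def reachable(start, goal):
    """BFS over markings; True if `goal` is reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        m = queue.popleft()
        if m == goal:
            return True
        for t in NET:
            if enabled(m, t):
                n = fire(m, t)
                if n not in seen:
                    seen.add(n)
                    queue.append(n)
    return False

start = frozenset({"idle", "agent_free"})
print(reachable(start, frozenset({"serving"})))           # reachable
print(reachable(start, frozenset({"serving", "idle"})))   # never occurs
```

PrT nets add predicates and structured tokens on top of this scheme, and translation rules to Promela let SPIN explore a marking graph like this one exhaustively.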

Relevância: 30.00%

Resumo:

Environmentally conscious construction has received a significant amount of research attention during the last decades. Even though the construction literature is rich in studies that emphasize the importance of environmental impact during the construction phase, most previous studies failed to combine environmental analysis with other project performance criteria in construction, mainly because they overlooked the multi-objective nature of construction projects. In order to achieve environmentally conscious construction, multiple objectives and their relationships need to be successfully analyzed in the complex construction environment. The complex construction system is composed of changing project conditions that have an impact on the relationship between the time, cost, and environmental impact (TCEI) of construction operations. Yet this impact is still unknown to construction professionals. Studying it is vital to fulfilling multiple project objectives and achieving environmentally conscious construction. This research proposes an analytical framework to analyze the impact of changing project conditions on the relationship of TCEI, including greenhouse gas (GHG) emissions as an environmental impact category. The methodology utilizes multi-agent systems, multi-objective optimization, the analytic network process, and system dynamics tools to study the relationships of TCEI and to support decision-making under the influence of project conditions. Life cycle assessment (LCA) is applied to the evaluation of environmental impact in terms of GHG. The mixed-method approach allowed for the collection and analysis of qualitative and quantitative data. Structured interviews with professionals in the highway construction field were conducted to gain their perspectives on decision-making under the influence of certain project conditions, while the quantitative data were collected from the Florida Department of Transportation (FDOT) for highway resurfacing projects. The data collected were used to test the framework. The framework yielded statistically significant results in simulating project conditions and optimizing TCEI. The results showed that changes in project conditions had a significant impact on the TCEI optimal solutions. The correlation between TCEI objectives suggested that they affect each other positively, but with different strengths. The findings of the study will assist contractors in visualizing the impact of their decisions on the relationship of TCEI.
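
As a toy illustration of the time-cost-emissions trade-off (all option names and figures below are invented, and this is far simpler than the framework above), a Pareto filter over a few hypothetical crew/method alternatives for one activity:

```python
# Minimal sketch: each alternative is scored on (duration, cost, emissions),
# all to be minimized; the Pareto filter keeps the non-dominated options.

OPTIONS = {
    "small_crew":   (10, 50_000, 8.0),   # (days, cost in $, t CO2e)
    "large_crew":   (6, 70_000, 8.5),
    "night_shift":  (7, 85_000, 9.0),    # worse than large_crew on all three
    "recycled_mix": (11, 52_000, 5.5),
}

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(options):
    return {
        name: obj for name, obj in options.items()
        if not any(dominates(other, obj) for other in options.values())
    }

print(sorted(pareto_front(OPTIONS)))
```

A multi-objective optimizer over real project data does essentially this over a vastly larger, simulated option space; the changing project conditions the abstract studies shift the objective tuples and hence which options survive the filter.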

Relevância: 30.00%

Resumo:

The resilience of a social-ecological system is measured by its ability to retain core functionality when subjected to perturbation. Resilience is contextually dependent on the state of system components, the complex interactions among these components, and the timing, location, and magnitude of perturbations. The stability landscape concept provides a useful framework for considering resilience within the specified context of a particular social-ecological system but has proven difficult to operationalize. This difficulty stems largely from the complex, multidimensional nature of the systems of interest and uncertainty in system response. Agent-based models are an effective methodology for understanding how cross-scale processes within and across social and ecological domains contribute to overall system resilience. We present the results of a stylized model of agricultural land use in a small watershed that is typical of the Midwestern United States. The spatially explicit model couples land use, biophysical models, and economic drivers with an agent-based model to explore the effects of perturbations and policy adaptations on system outcomes. By applying the coupled modeling approach within the resilience and stability landscape frameworks, we (1) estimate the sensitivity of the system to context-specific perturbations, (2) determine potential outcomes of those perturbations, (3) identify possible alternative states within state space, (4) evaluate the resilience of system states, and (5) characterize changes in system-scale resilience brought on by changes in individual land use decisions.
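
A heavily stylized, invented sketch of the kind of perturbation experiment described above: a one-variable land-use model in which farmers replant row crops while profitable, a price shock is the perturbation, and recovery of the cropped fraction is the response measure. Every number below is made up for illustration.

```python
import random

# Toy sketch: n farmers switch land use on the sign of per-acre profit.
# A one-time price shock at year 20 perturbs the system; the trajectory
# of the cropped fraction shows sensitivity and recovery.

random.seed(7)

def simulate(shock_size, years=60, n=100):
    crop = [True] * n                       # all land starts in row crops
    price = 1.0
    history = []
    for year in range(years):
        if year == 20:
            price -= shock_size             # the perturbation
        price += 0.1 * (1.0 - price)        # slow reversion to the mean
        for i in range(n):
            profit = price - 0.8 + random.uniform(-0.05, 0.05)
            crop[i] = profit > 0            # land-use decision
        history.append(sum(crop) / n)       # cropped fraction this year
    return history

mild = simulate(shock_size=0.1)
severe = simulate(shock_size=0.4)
print("minimum cropped fraction, mild shock:  ", min(mild))
print("minimum cropped fraction, severe shock:", min(severe))
```

In the stability-landscape language, sweeping `shock_size` probes the basin around the all-cropped state; a richer model with interacting agents and biophysical feedbacks could exhibit the alternative states the abstract maps.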

Relevância: 30.00%

Resumo:

The hierarchical forest-planning process currently in place on public lands risks failing at two levels. At the upper level, the current process does not provide sufficient evidence of the sustainability of the current harvest level. At a lower level, the current process does not support realizing the full value-creation potential of the forest resource, at times needlessly constraining short-term harvest planning. These failures are attributable to certain assumptions implicit in the wood-supply optimization model, which may explain why this problem is not well documented in the literature. We use agency theory to model the hierarchical forest-planning process on public lands. We develop an iterative two-stage simulation framework to estimate the long-term effect of the interaction between the state and the fibre consumer, allowing us to identify conditions that can lead to stock-outs. We then propose an improved formulation of the wood-supply optimization model. The classical formulation (i.e., maximization of the sustained fibre yield) does not consider that the industrial fibre consumer seeks to maximize profit; instead it assumes total consumption of the fibre supply in every period, regardless of its value-creation potential. We extend the classical formulation to anticipate the fibre consumer's behaviour, thereby increasing the probability that the fibre supply is fully consumed and restoring the validity of the total-consumption assumption implicit in the optimization model. We model the principal-agent relationship between government and industry with a bilevel formulation of the optimization model, in which the upper level represents the wood-supply determination process (the government's responsibility) and the lower level represents the fibre-consumption process (industry's responsibility). We show that the bilevel formulation can mitigate the risk of stock-outs, improving the credibility of the hierarchical forest-planning process. Together, the bilevel wood-supply optimization model and the methodology we developed to solve it to optimality represent an alternative to the methods currently in use. Our bilevel model and the iterative simulation framework are a step forward in value-driven forest-planning technology. Explicitly integrating industrial objectives and constraints into the forest-planning process, starting with wood-supply determination, should foster closer collaboration between government and industry and thus allow the full value-creation potential of the forest resource to be realized.
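
A drastically simplified, invented sketch of the leader-follower idea (prices, costs, and candidate cuts are all made up, and the real model is a full bilevel optimization, not this enumeration): the government posts an allowable cut, the mill consumes only the volume that is profitable, and anticipating that behaviour changes which cut performs best.

```python
# Toy sketch of a bilevel (leader-follower) harvest decision. The leader
# picks an allowable cut; the follower's reaction is substituted into the
# leader's objective, here taken as total volume actually consumed.

def follower_demand(price, unit_cost, capacity):
    """The mill consumes up to capacity only when the margin is positive."""
    return capacity if price > unit_cost else 0.0

def leader_choose(prices, unit_cost, capacity, candidate_cuts):
    """Pick the allowable cut maximizing the volume actually consumed."""
    best_cut, best_total = None, -1.0
    for cut in candidate_cuts:
        total = sum(min(cut, follower_demand(p, unit_cost, capacity))
                    for p in prices)
        if total > best_total:
            best_cut, best_total = cut, total
    return best_cut, best_total

prices = [40, 55, 70, 35, 60]               # market price per period
cut, consumed = leader_choose(prices, unit_cost=50, capacity=80,
                              candidate_cuts=[40, 80, 120])
print(cut, consumed)
```

The classical yield-maximizing formulation corresponds to assuming `follower_demand` always returns the full cut; the gap between the two assumptions is exactly the unconsumed supply that motivates the bilevel model.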

Relevância: 30.00%

Resumo:

The philosophy of minimalism in robotics promotes gaining an understanding of sensing and computational requirements for solving a task. This minimalist approach lies in contrast to the common practice of first taking an existing sensory motor system, and only afterwards determining how to apply the robotic system to the task. While it may seem convenient to simply apply existing hardware systems to the task at hand, this design philosophy often proves to be wasteful in terms of energy consumption and cost, along with unnecessary complexity and decreased reliability. While impressive in terms of their versatility, complex robots such as the PR2 (which cost hundreds of thousands of dollars) are impractical for many common applications. Instead, if a specific task is required, sensing and computational requirements can be determined specific to that task, and a clever hardware implementation can be built to accomplish the task. Since this minimalist hardware would be designed around accomplishing the specified task, significant reductions in hardware complexity can be obtained. This can lead to huge advantages in battery life, cost, and reliability. Even if cost is of no concern, battery life is often a limiting factor in many applications. Thus, a minimalist hardware system is critical in achieving the system requirements. In this thesis, we will discuss an implementation of a counting, tracking, and actuation system as it relates to ergodic bodies to illustrate a minimalist design methodology.

Relevância: 30.00%

Resumo:

Marine recirculating aquaculture systems (RAS) produce great volumes of wastewater, which may be recirculated, reutilized after undergoing different treatment/remediation methods, or partly discharged into neighbouring water bodies (DWW). Phosphates, in particular, usually accumulate at high concentrations in DWW, both because their monitoring is not compulsory for fish production (phosphate is not a limiting parameter) and because no specific treatment has so far been developed to remove them, especially from saltwater effluents. This work therefore addresses two main scientific questions. The first concerns understanding the current (bio)remediation methods applied to effluents produced in marine RAS, by identifying their advantages, drawbacks, and gaps with respect to saltwater effluents. The second is the development of a new, innovative, and efficient method for treating saltwater effluents that can potentially fill the gaps identified in the conventional treatments. Thereby, the aims of this thesis are: (i) to review the conventional treatments targeting the major contaminants in marine RAS effluents, with a particular focus on the bioremediation approaches already applied to phosphates; and (ii) to characterize and evaluate the potential of oyster-shell waste collected in Ria de Aveiro as a bioremediation agent for phosphates spiked into artificial saltwater, across different influencing factors (e.g., oyster-shell pre-treatment through calcination, particle size, adsorbent concentration). Although oyster shells have already been used for phosphorus (P) removal in freshwater, their biosorptive potential for P in saltwater had, as far as I am aware, never been evaluated. The results generated here showed that natural oyster shell (NOS) is mainly composed of carbonates, which are almost completely converted into lime (CaO) after calcination (COS). This pre-treatment yielded a more reactive material for P removal, since higher removal percentages and adsorption capacities were observed for COS. Smaller particle-size fractions of both NOS and COS also increased P removal. Kinetic modelling showed that adsorption on NOS followed both the Elovich and the intraparticle-diffusion models, suggesting that P removal is simultaneously a diffusion- and chemically rate-controlled process. P removal by COS was not controlled by intraparticle diffusion, and the Elovich model best fitted its phosphate-removal kinetics. This work demonstrated that waste oyster shells, either NOS or COS, can be used as an effective biosorbent for P removal from seawater. This biomaterial can therefore sustain a cost-effective and eco-friendly bioremediation strategy with potential application in marine RAS.
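
The Elovich fit mentioned above can be sketched on synthetic data (the parameter values and sampling times below are invented, not the thesis's measurements). The Elovich model is q(t) = (1/β)·ln(1 + αβt); for large t it linearizes to q ≈ a + (1/β)·ln t, so an ordinary least-squares fit of uptake against ln t recovers β from the slope:

```python
import math

# Illustrative sketch: generate synthetic uptake points from the Elovich
# model, then recover beta via ordinary least squares on q versus ln(t).

def elovich(t, alpha, beta):
    """Elovich uptake: q(t) = (1/beta) * ln(1 + alpha*beta*t)."""
    return math.log(1.0 + alpha * beta * t) / beta

# synthetic "measurements" generated with alpha = 2.0, beta = 0.5
times = [5, 10, 20, 40, 80, 160]
q_obs = [elovich(t, 2.0, 0.5) for t in times]

# ordinary least squares of q against ln(t); slope estimates 1/beta
x = [math.log(t) for t in times]
n = len(x)
xbar, ybar = sum(x) / n, sum(q_obs) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, q_obs)) \
        / sum((xi - xbar) ** 2 for xi in x)
beta_hat = 1.0 / slope

print(f"estimated beta = {beta_hat:.3f} (true value 0.5)")
```

The slight bias in the estimate comes from using the large-t linearization at small t; a nonlinear fit of the full expression would remove it. The intraparticle-diffusion model is tested analogously by regressing q against the square root of t.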

Relevância: 30.00%

Resumo:

The major function of this model is to access the UCI Wisconsin Breast Cancer data set [1] and classify the data items into two categories, normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and by observed immune functions, principles, and models, applied to problem solving. The Dendritic Cell Algorithm (DCA) [2] is an AIS algorithm developed specifically for anomaly detection; it has been successfully applied to intrusion detection in computer security. Agent-based modelling is believed to be an ideal approach for implementing AIS, as intelligent agents can naturally represent the immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment called AnyLogic, where the immune entities of the DCA are represented by intelligent agents. If this model can be implemented successfully, it becomes possible to implement more complicated and adaptive AIS models in the agent-based simulation environment.
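
A toy sketch of the DCA's core idea follows (the window size, signal values, and antigen names are invented, and this omits most of the published algorithm, such as signal weighting and cell migration thresholds): each "dendritic cell" accumulates danger and safe signals over a window of data items, the dominant signal decides the cell's context, and each antigen's anomaly score is the fraction of its presentations made in an anomalous context.

```python
# Minimal sketch: cells sample fixed windows of the input stream; antigens
# inherit each sampling cell's mature/semi-mature context, and the final
# score per antigen approximates the DCA's mature context antigen value.

def classify(stream, window=3):
    """stream: list of (antigen_id, danger_signal, safe_signal) tuples."""
    verdicts = {}
    for start in range(0, len(stream), window):
        cell = stream[start:start + window]        # one cell's sample
        danger = sum(d for _, d, _ in cell)
        safe = sum(s for _, _, s in cell)
        mature = danger > safe                     # cell context decision
        for antigen, _, _ in cell:
            verdicts.setdefault(antigen, []).append(mature)
    # score: fraction of presentations made in a mature (anomalous) context
    return {a: sum(v) / len(v) for a, v in verdicts.items()}

stream = [
    ("proc_7", 0.9, 0.1), ("proc_7", 0.8, 0.2), ("proc_3", 0.1, 0.9),
    ("proc_3", 0.2, 0.8), ("proc_3", 0.1, 0.7), ("proc_7", 0.7, 0.3),
]
scores = classify(stream)
print({a: round(m, 2) for a, m in sorted(scores.items())})
```

In the agent-based re-implementation each cell becomes an agent sampling the stream concurrently rather than over fixed windows, but the context-and-score mechanism is the same.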