858 results for Multi-agent simulation
Abstract:
Introduction: Mantle cell lymphoma (MCL) accounts for 6% of all B-cell lymphomas and remains incurable for most patients. Those who relapse after first-line therapy or hematopoietic stem cell transplantation have a dismal prognosis, with short response duration after salvage therapy. On a molecular level, MCL is characterised by the translocation t(11;14) leading to Cyclin D1 overexpression. Cyclin D1 is downstream of the mammalian target of rapamycin (mTOR) kinase and can be effectively blocked by mTOR inhibitors such as temsirolimus. We set out to define the single-agent activity of the orally available mTOR inhibitor everolimus (RAD001) in a prospective, multi-centre trial in patients with relapsed or refractory MCL (NCT00516412). The study was performed in collaboration with the EU-MCL network. Methods: Eligible patients with histologically/cytologically confirmed relapsed (no more than 3 prior lines of systemic treatment) or refractory MCL received everolimus 10 mg orally daily on days 1-28 of each 4-week cycle for 6 cycles or until disease progression. The primary endpoint was best objective response; adverse reactions, time to progression (TTP), time to treatment failure, response duration and molecular response were secondary endpoints. A response rate of ≤ 10% was considered uninteresting and, conversely, a rate of ≥ 30% promising. The required sample size was 35 patients using Simon's optimal two-stage design with 90% power and 5% significance. Results: A total of 36 patients (35 evaluable) from 19 centers were enrolled between August 2007 and January 2010. The median age was 69.4 years (range 40.1 to 84.9 years), with 22 males and 13 females. Thirty patients presented with relapsed and 5 with refractory MCL, with a median of two prior therapies. Treatment was generally well tolerated, with anemia (11%), thrombocytopenia (11%), neutropenia (8%), diarrhea (3%) and fatigue (3%) being the most frequent complications of CTC grade III or higher. Eighteen patients received 6 or more cycles of everolimus treatment. The objective response rate was 20% (95% CI: 8-37%) with 2 CR, 5 PR, 17 SD, and 11 PD. At a median follow-up of 6 months, TTP was 5.45 months (95% CI: 2.8-8.2 months) for the entire population and 10.6 months for the 18 patients receiving 6 or more cycles of treatment. Conclusion: This study demonstrates that single-agent everolimus 10 mg once daily orally is well tolerated. The null hypothesis of inactivity could be rejected, indicating moderate anti-lymphoma activity in relapsed/refractory MCL. Further studies of everolimus, either in combination with chemotherapy or as a single agent for maintenance treatment, are warranted in MCL.
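The trial's statistical framing lends itself to a quick worked check. The sketch below (Python with SciPy; an illustration under the stated design parameters, not the study's actual analysis code, and simpler than the two-stage design's formal decision rule) tests the observed 7/35 responses (2 CR + 5 PR) against the 10% null response rate and reproduces a Clopper-Pearson-style 95% confidence interval close to the 8-37% reported above.

```python
from scipy.stats import binomtest

# Observed outcome: 2 CR + 5 PR = 7 objective responses among 35 evaluable patients.
k, n = 7, 35

# Exact one-sided test of the trial's null hypothesis H0: response rate <= 10%.
test = binomtest(k, n, p=0.10, alternative="greater")
print(f"observed rate = {k / n:.0%}, one-sided exact p = {test.pvalue:.4f}")

# Two-sided Clopper-Pearson 95% CI for the response rate.
ci = binomtest(k, n, p=0.10).proportion_ci(confidence_level=0.95)
print(f"95% CI: ({ci.low:.2f}, {ci.high:.2f})")
```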
Abstract:
In this Master's thesis, agent-based modeling is used to analyze phenomena related to maintenance strategy. The main research question was: what does the agent-based model built for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? The main outcome of this study is thus an analysis of how profitability can be increased in an industrial maintenance context. To answer this question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, whose construction followed a standard simulation modeling procedure. The research question was then answered using the simulation results from the model. Specifically, the results of the modeling and of this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine, and under certain conditions also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in having more accurate machine condition measurement systems.
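To make finding (4) concrete, here is a minimal condition-based maintenance sketch in Python (the profit model and all parameter values are illustrative assumptions, not the thesis model): maintenance is triggered when a noisy measurement of machine condition crosses a threshold, so larger measurement error causes both premature servicing and missed failures.

```python
import random

def simulate_owner_profit(threshold, noise_sd, days=10_000, seed=1):
    """Toy condition-based maintenance model. A machine degrades linearly each
    day; it is serviced when the *measured* condition crosses `threshold`.
    Measurement noise triggers maintenance too early (lost remaining life)
    or too late (unplanned failure cost)."""
    rng = random.Random(seed)
    condition, profit = 1.0, 0.0
    for _ in range(days):
        condition -= 0.01                      # daily wear
        profit += 100 * condition              # revenue falls with condition
        measured = condition + rng.gauss(0, noise_sd)
        if condition <= 0:                     # unplanned failure
            profit -= 5_000
            condition = 1.0
        elif measured <= threshold:            # planned maintenance
            profit -= 500
            condition = 1.0
    return profit

for noise in (0.0, 0.05, 0.15):               # rising measurement error
    print(noise, round(simulate_owner_profit(threshold=0.2, noise_sd=noise)))
```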
Abstract:
The simulation films accompanying this document were produced with PyMOL.
Abstract:
Routine activity theory, introduced by Cohen & Felson in 1979, states that criminal acts result from the presence of criminals and victims, and the absence of guardians, at the same time and place. As the number of collisions of these elements in place and time increases, criminal acts will also increase, even if the number of criminals or civilians remains the same within the vicinity of a city. Street robbery is a typical example of routine activity theory, and its occurrence can be predicted using the theory. Agent-based models allow the simulation of diversity among individuals; agent-based simulation of street robbery can therefore be used to visualize how chronological aspects of human activity influence the incidence of street robbery. The conceptual model identifies three classes of people, each with certain activity areas: criminals, civilians and police. Police exist only as agents of formal guardianship. Criminals with a tendency for crime search for their victims. Civilians without criminal tendency can be either victims or guardians. In addition to criminal tendency, each civilian in the model has a unique set of characteristics such as wealth, employment status and ability for guardianship. These agents are subjected to a random walk through a street environment guided by a Q-learning module, and the possible outcomes are analyzed.
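A minimal sketch of the kind of Q-learning module that can guide an agent's walk follows (Python; the states, rewards and agent classes here are placeholder assumptions, far simpler than the conceptual model described above).

```python
import random

# Sketch of a Q-learning module guiding an agent's walk on a street grid.
ACTIONS = ["N", "S", "E", "W"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

q_table = {}  # (state, action) -> estimated long-run reward

def choose_action(state, rng=random):
    """Epsilon-greedy choice over the four movement directions."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Example step: a criminal agent at cell (3, 4) robs successfully after moving east.
update(state=(3, 4), action="E", reward=1.0, next_state=(4, 4))
```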
Abstract:
In many real-world contexts individuals find themselves in situations where they have to decide between behaviours that serve a collective purpose and behaviours which satisfy their private interests, ignoring the collective. In some cases the underlying social dilemma (Dawes, 1980) is solved and we observe collective action (Olson, 1965); in others social mobilisation is unsuccessful. The central topic of social dilemma research is the identification and understanding of the mechanisms which lead to the observed cooperation and therefore resolve the social dilemma. The purpose of this thesis is to contribute to this research field for the case of public good dilemmas. To do so, existing work relevant to this problem domain is reviewed and a set of mandatory requirements is derived which guide the theory and method development of the thesis. In particular, the thesis focuses on dynamic processes of social mobilisation which can foster or inhibit collective action. The basic understanding is that success or failure of the required process of social mobilisation is determined by the heterogeneous individual preferences of the members of a providing group, the social structure in which the acting individuals are embedded, and the embedding of the individuals in economic, political, biophysical, or other external contexts. To account for these aspects and for the dynamics involved, the methodical approach of the thesis is computer simulation, in particular agent-based modelling and simulation of social systems. Particularly conducive are agent models which ground the simulation of human behaviour in suitable psychological theories of action. The thesis develops the action theory HAPPenInGS (Heterogeneous Agents Providing Public Goods) and demonstrates its embedding into different agent-based simulations. The thesis substantiates the particular added value of this methodical approach: starting out from a theory of individual behaviour, the emergence of collective patterns of behaviour becomes observable in simulations. In addition, the underlying collective dynamics may be scrutinised and assessed by scenario analysis. The results of such experiments reveal insights into processes of social mobilisation which go beyond classical empirical approaches and, in particular, yield policy recommendations on promising intervention measures.
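For readers unfamiliar with public good dilemmas, the following Python sketch simulates a linear public goods game with heterogeneous, conditionally cooperative agents (a toy illustration of the dilemma structure, not the HAPPenInGS theory itself).

```python
import random

def simulate(n_agents=100, rounds=50, multiplier=1.6, seed=0):
    """Linear public goods game with heterogeneous, conditionally cooperative
    agents. Contributions are pooled, multiplied and shared equally, so full
    defection is individually rational while full cooperation is collectively
    optimal -- the dilemma structure discussed above."""
    rng = random.Random(seed)
    cooperativeness = [rng.random() for _ in range(n_agents)]  # heterogeneous preferences
    norm = 0.5  # shared belief about what others contribute
    for _ in range(rounds):
        # Conditional cooperation: own disposition blended with the group norm.
        contribs = [0.5 * c + 0.5 * norm for c in cooperativeness]
        pot = sum(contribs) * multiplier
        payoffs = [1.0 - g + pot / n_agents for g in contribs]
        norm = sum(contribs) / n_agents
    return norm, sum(payoffs) / n_agents

print(simulate())  # final contribution norm and mean payoff
```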
Abstract:
This research project seeks to use an agent-based modelling computer system to measure the brand perception of an organization within a heterogeneous population. It is expected to provide information that allows an organization to find solutions concerning the behaviour of its consumers and the associated brand perception. The purpose of this system is to model the perception-reasoning-action process, simulating reasoning as the result of an accumulation of perceptions that lead to the consumer's actions. This result defines the consumer's acceptance or rejection of the company's brand. Information about a specific organization in the marketing field was collected; after compiling and processing the information obtained from the company, brand perception analysis was applied through simulation processes. The results of the experiment are delivered to the organization in a report with conclusions and marketing-level recommendations for improving consumers' brand perception.
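One way to read the perception-reasoning-action loop is as a simple agent that accumulates perceptions and acts once an attitude threshold is crossed. The Python sketch below is hypothetical in every detail (class, methods, threshold rule); the study's actual model is calibrated with the company's marketing data.

```python
from dataclasses import dataclass, field

@dataclass
class Consumer:
    """Sketch of a perception-reasoning-action agent."""
    perceptions: list = field(default_factory=list)

    def perceive(self, stimulus: float) -> None:
        """Accumulate one perception, e.g. an ad exposure scored in [-1, 1]."""
        self.perceptions.append(stimulus)

    def reason(self) -> float:
        """Brand attitude: the mean of all accumulated perceptions."""
        return sum(self.perceptions) / len(self.perceptions) if self.perceptions else 0.0

    def act(self, threshold: float = 0.0) -> str:
        """Accept the brand once attitude clears the threshold, else reject."""
        return "accept" if self.reason() >= threshold else "reject"

c = Consumer()
for s in (0.4, -0.1, 0.3):   # three simulated exposures
    c.perceive(s)
print(c.reason(), c.act())
```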
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent, peculiar structures of the individual phenotype. Being able to reproduce the system's dynamics at different levels of such a hierarchy can be very useful for studying this complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. On these premises, the thesis reviews the different approaches already developed for modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment/multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target output. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model's goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data with spatial and temporal resolution, acquired from free online sources.
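As a reference point for the simulation engine, here is a plain single-compartment implementation of Gillespie's direct method in Python (a textbook sketch; MS-BioNET's many-species/many-channels optimisation and compartment networks are not reproduced here). Each step draws an exponential waiting time from the total propensity and picks a reaction channel with probability proportional to its rate.

```python
import math
import random

def gillespie_direct(propensities, updates, state, t_end, seed=0):
    """Gillespie's direct method for one well-mixed compartment."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, tuple(state))]
    while t < t_end:
        rates = [f(state) for f in propensities]
        total = sum(rates)
        if total == 0.0:                             # no reaction can fire any more
            break
        t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
        r, acc = rng.random() * total, 0.0
        for chosen, rate in enumerate(rates):        # channel ~ P(i) = rate_i / total
            acc += rate
            if r < acc:
                break
        state = [s + d for s, d in zip(state, updates[chosen])]
        trajectory.append((t, tuple(state)))
    return trajectory

# Example: isomerisation A -> B with rate constant k = 0.5 (illustrative).
print(gillespie_direct([lambda s: 0.5 * s[0]], [(-1, +1)], [100, 0], t_end=20.0)[-1])
```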
Abstract:
In this thesis, we propose a novel approach to model the diffusion of residential PV systems. For this purpose, we use an agent-based model where the agents are the families living in the area of interest. The case study is the Emilia-Romagna Regional Energy Plan, which aims to increase the production of electricity from renewable energy. We therefore study the microdata from the Survey on Household Income and Wealth (SHIW) provided by the Bank of Italy in order to obtain the characteristics of families living in Emilia-Romagna. These data have allowed us to artificially generate families and reproduce the socio-economic aspects of the region. The generated families are placed in the virtual world by associating them with buildings, which are acquired by analysing the vector data of regional buildings made available by the region. Each year, the model determines the level of diffusion by simulating the installed capacity. The adoption behaviour is influenced by social interactions, the household's economic situation, the environmental benefits arising from adoption, and the payback period of the investment.
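A minimal Python sketch of a household's yearly adoption decision follows. The functional form and every number (costs, yields, weights) are illustrative assumptions, not the thesis calibration; the point is only how payback, peer adoption, affordability and an environmental term can combine into an adoption probability.

```python
def payback_years(capacity_kw, cost_per_kw=1500.0, yield_kwh_per_kw=1100.0,
                  value_per_kwh=0.18):
    """Simple payback period of a residential PV investment."""
    investment = capacity_kw * cost_per_kw
    yearly_saving = capacity_kw * yield_kwh_per_kw * value_per_kwh
    return investment / yearly_saving

def adoption_probability(payback, adopter_neighbours, income_ok, env_weight=0.1):
    """Toy yearly adoption rule combining the four drivers named above:
    economic situation, social interactions, environmental benefit, payback."""
    if not income_ok:                                 # family cannot afford it
        return 0.0
    economic = max(0.0, 1.0 - payback / 25.0)         # shorter payback -> higher utility
    social = min(1.0, adopter_neighbours / 10.0)      # peer effect, saturating
    return min(1.0, 0.5 * economic + 0.4 * social + env_weight)

p = adoption_probability(payback_years(3.0), adopter_neighbours=4, income_ok=True)
print(f"adoption probability this year: {p:.2f}")
```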
Abstract:
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning.
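The shape metric used above, the ratio of smallest to largest principal moment of inertia, is straightforward to compute from a segmented tumor mask. Below is a generic sketch in Python with NumPy, assuming unit-mass voxels (not the paper's code).

```python
import numpy as np

def inertia_ratio(mask):
    """Smallest-to-largest principal moment of inertia of a 3-D voxel mask."""
    coords = np.argwhere(mask).astype(float)
    coords -= coords.mean(axis=0)                 # centre-of-mass frame
    r2 = (coords ** 2).sum(axis=1)
    # Inertia tensor: I_ij = sum_k (|r_k|^2 * delta_ij - r_ki * r_kj)
    inertia = np.eye(3) * r2.sum() - coords.T @ coords
    moments = np.linalg.eigvalsh(inertia)         # ascending eigenvalues
    return moments[0] / moments[-1]

# A sphere-like blob gives a ratio near 1; elongated growth lowers it.
z, y, x = np.ogrid[-10:11, -10:11, -10:11]
sphere = (x**2 + y**2 + z**2) <= 100
print(round(inertia_ratio(sphere), 3))
```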
Abstract:
The master production schedule (MPS) plays an important role in an integrated production planning system. It converts the strategic planning defined in a production plan into tactical operational execution. The MPS is also known as a tool for top management to exercise control over manufacturing resources, and it becomes the input to downstream planning levels such as material requirement planning (MRP) and capacity requirement planning (CRP). Hence, inappropriate decisions in MPS development may lead to infeasible execution, which ultimately causes poor delivery performance. One must ensure that the proposed MPS is valid and realistic for implementation before it is released to the real manufacturing system. In practice, where the production environment is stochastic in nature, developing the MPS is no longer a simple task. Varying processing times and random events such as machine failures are just some of the underlying causes of uncertainty that can hardly be addressed at the planning stage, so a valid and realistic MPS is tough to realize. The MPS creation problem becomes even more sophisticated as decision makers try to consider multiple objectives: minimizing inventory, maximizing customer satisfaction, and maximizing resource utilization. This study proposes a methodology for MPS creation which is able to deal with these obstacles. The approach takes uncertainty into account and makes trade-offs among the conflicting objectives at the same time. It incorporates fuzzy multi-objective linear programming (FMOLP) and discrete event simulation (DES) for MPS development.
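To illustrate the FMOLP side, here is a sketch of the classic Zimmermann max-min formulation in Python with SciPy (all coefficients, goals and capacities are made-up placeholders, and the study's own formulation may differ): each fuzzy objective gets a linear membership function, and an auxiliary satisfaction level lambda is maximised subject to lambda not exceeding any membership.

```python
from scipy.optimize import linprog

# Decision variables: x1, x2 = MPS quantities of two products, plus lambda.
# Fuzzy goals with linear memberships mu = (g - worst) / (best - worst):
#   profit  p(x) = 5*x1 + 4*x2, worst 0, best 200
#   service s(x) = x1 + x2,     worst 0, best 45
# "lambda <= mu" becomes a <=-0 row: -mu(x) + lambda <= 0.
A_ub = [
    [-5 / 200, -4 / 200, 1.0],   # lambda <= mu_profit
    [-1 / 45, -1 / 45, 1.0],     # lambda <= mu_service
    [2.0, 1.0, 0.0],             # machine capacity: 2*x1 + x2 <= 60
    [1.0, 3.0, 0.0],             # labour capacity:  x1 + 3*x2 <= 90
]
b_ub = [0.0, 0.0, 60.0, 90.0]

res = linprog(c=[0, 0, -1.0],    # minimise -lambda, i.e. maximise lambda
              A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x
print(f"MPS: x1={x1:.1f}, x2={x2:.1f}, overall satisfaction lambda={lam:.2f}")
```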
Abstract:
Cloud Computing is an enabler for delivering large-scale, distributed enterprise applications with strict performance requirements. It is often the case that such applications have complex scaling and Service Level Agreement (SLA) management requirements. In this paper we present a simulation approach for validating and comparing SLA-aware scaling policies in the CloudSim simulator, using data from an actual Distributed Enterprise Information System (dEIS). We extend CloudSim with concurrent and multi-tenant task simulation capabilities, and then show how different scaling policies can be used for simulating multiple dEIS applications. We present multiple experiments depicting the impact of VM scaling on both datacenter energy consumption and dEIS performance indicators.
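As a language-agnostic illustration of what an SLA-aware scaling policy decides at each step, here is a standalone Python toy with made-up thresholds and a crude latency proxy (the paper's policies run inside CloudSim, which is Java-based; this is not CloudSim code). It also records VM-hours as a rough stand-in for the energy cost side of the trade-off.

```python
def simulate_scaling(load_trace, sla_ms=200.0, vm_capacity=100.0,
                     up_thresh=0.8, down_thresh=0.3):
    """Toy threshold-based VM scaling policy evaluated against an SLA."""
    vms, violations, vm_hours = 1, 0, 0
    for load in load_trace:                          # requests per time step
        utilisation = load / (vms * vm_capacity)
        # Crude latency proxy: grows sharply as utilisation nears saturation.
        latency = 50.0 / (1.0 - min(utilisation, 0.999))
        if latency > sla_ms:
            violations += 1                          # SLA violation this step
        if utilisation > up_thresh:                  # scale out
            vms += 1
        elif utilisation < down_thresh and vms > 1:  # scale in
            vms -= 1
        vm_hours += vms                              # proxy for energy cost
    return violations, vm_hours

trace = [30, 60, 120, 240, 300, 240, 120, 60, 30]    # a synthetic load spike
print(simulate_scaling(trace))
```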
Abstract:
Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect of individual investigational sites enrolling small numbers of patients on the overall result: can the presence of small centers cause an ineffective treatment to appear effective when treatment-by-center interaction is not statistically significant? In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison to a placebo. Twelve of the 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170 and 300. The simulated data are generated with various characteristics: one scenario in which the treatment should be considered effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions. Standard analysis of variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model investigates treatment and center effects and treatment-by-center interaction; another investigates treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error. We find it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations. In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within standard limits of type I error.
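A single replication of this kind of simulation is easy to sketch in Python with statsmodels (centre sizes, effect sizes and error variance below are illustrative assumptions, not the dissertation's exact design); repeating it many times and counting significant terms yields the power and type I error estimates described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# One simulated trial: 20 centres, 12 of them "small" (fewer than four
# patients per arm), with a continuous response.
rows = []
for centre in range(20):
    n_per_arm = rng.integers(1, 4) if centre < 12 else rng.integers(5, 10)
    for arm in ("placebo", "treatment"):
        mean = 0.0   # null scenario; give the treatment arm a nonzero mean
                     # to simulate an effective drug instead
        for _ in range(int(n_per_arm)):
            rows.append({"centre": centre, "arm": arm, "y": rng.normal(mean, 1.0)})
df = pd.DataFrame(rows)

# Model with treatment, centre and treatment-by-centre interaction terms.
fit = smf.ols("y ~ C(arm) * C(centre)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```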
Abstract:
The primary hypothesis stated by this paper is that the use of social choice theory in Ambient Intelligence systems can significantly improve user satisfaction when accessing shared resources. A research methodology based on agent-based social simulations is employed to support this hypothesis and to evaluate these benefits. The result is a six-fold contribution, summarized as follows. Firstly, several considerable differences between this application case and the most prominent social choice application, political elections, have been found and described. Secondly, given these differences, a number of metrics to evaluate different voting systems in this scope have been proposed and formalized. Thirdly, given the presented application and the proposed metrics, the performance of a number of well-known electoral systems is compared. Fourthly, as a result of the performance study, a novel voting algorithm capable of obtaining the best balance between the reviewed metrics is introduced. Fifthly, to improve social welfare in the experiments, the voting methods are combined with cluster analysis techniques. Finally, the article is complemented by a free and open-source tool, VoteSim, which not only ensures the reproducibility of the experimental results presented, but also allows the interested reader to adapt the case study to different environments.
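For a flavour of how electoral systems can be compared on simulated preference profiles, here is a minimal Python sketch contrasting plurality and Borda count winners (random profiles and two classic rules only; the paper's agents, metrics and the VoteSim tool go well beyond this).

```python
import random
from collections import Counter

def plurality(ballots):
    """Winner = alternative with the most first-place votes."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Winner = alternative with the highest total Borda score."""
    scores, m = Counter(), len(ballots[0])
    for b in ballots:
        for rank, alt in enumerate(b):      # top rank earns m-1 points, last earns 0
            scores[alt] += m - 1 - rank
    return scores.most_common(1)[0][0]

# Random preference profiles over four shared-resource alternatives.
rng = random.Random(7)
alts = ["A", "B", "C", "D"]
ballots = [rng.sample(alts, k=len(alts)) for _ in range(101)]
print("plurality:", plurality(ballots), "| borda:", borda(ballots))
```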
Abstract:
Pipeline transport is one of the most important means of moving oil derivatives to different locations. It is an inexpensive means of transport, with small variable costs and a great degree of reliability. Pipeline scheduling, however, is not a trivial task, and it demands considerable time from schedulers. Discussed here is a real-application case of a tool that helps schedulers simulate pipeline performance as a means of creating a feasible schedule for a particular time span.