13 results for Multi-agent computing
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Humans and animals face decision tasks in uncertain multi-agent environments where an agent's strategy may change over time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for blackjack and the inspector game. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate for explaining automated decision learning of a Nash equilibrium in two-player games.
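The policy-gradient idea behind this abstract can be illustrated with a minimal, non-spiking sketch: two sigmoid-parameterised players ascend the gradient of their expected payoff in a 2x2 game with a pure Nash equilibrium. The prisoner's-dilemma payoff matrix, step counts and learning rate below are illustrative assumptions, not taken from the paper, and the exact-gradient update stands in for the paper's stochastic spike-based estimate.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Prisoner's-dilemma payoffs for the ROW player (assumed for illustration):
# action 0 = cooperate, action 1 = defect; the column player's payoff is
# R[b][a] by symmetry. The unique Nash equilibrium is (defect, defect).
R = [[3, 0],
     [5, 1]]

def train(steps=2000, lr=0.5):
    th_r = th_c = 0.0                        # logits of "defect" for each player
    for _ in range(steps):
        p, q = sigmoid(th_r), sigmoid(th_c)  # current defection probabilities
        # Payoff gain of defecting over cooperating against the opponent's mix
        gain_r = (R[1][0]*(1 - q) + R[1][1]*q) - (R[0][0]*(1 - q) + R[0][1]*q)
        gain_c = (R[1][0]*(1 - p) + R[1][1]*p) - (R[0][0]*(1 - p) + R[0][1]*p)
        # Gradient ascent on expected payoff, chain rule through the sigmoid
        th_r += lr * gain_r * p * (1 - p)
        th_c += lr * gain_c * q * (1 - q)
    return sigmoid(th_r), sigmoid(th_c)

pa, pb = train()   # both approach 1.0: the pure (deterministic) equilibrium
```

Both players converge to the dominant "defect" action, i.e. the pure equilibrium case. For games whose only equilibrium is mixed, naive simultaneous gradient play can cycle rather than converge, which is consistent with the abstract's observation that simpler learning rules fail for the stochastic strategy.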
Abstract:
Starting off from the usual language of modal logic for multi-agent systems dealing with the agents’ knowledge/belief and common knowledge/belief, we define so-called epistemic Kripke structures for intuitionistic (common) knowledge/belief. We then introduce corresponding deductive systems and show that they are sound and complete with respect to these semantics.
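For orientation, the classical axiomatisation that such work typically starts from can be recalled as follows (a standard textbook formulation, not quoted from the paper; the intuitionistic systems the paper introduces modify this base):

```latex
% K_i: agent i knows; E: everybody knows; C: common knowledge (n agents).
\begin{align*}
  &\text{(K)}   && K_i(\varphi \to \psi) \to (K_i\varphi \to K_i\psi)\\
  &\text{(E)}   && E\varphi \leftrightarrow \bigwedge_{i=1}^{n} K_i\varphi\\
  &\text{(fix)} && C\varphi \to E(\varphi \land C\varphi)\\
  &\text{(ind)} && \text{from } \vdash \varphi \to E(\psi \land \varphi)
                   \text{ infer } \vdash \varphi \to C\psi
\end{align*}
```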
Abstract:
BACKGROUND Pleomorphic rhabdomyosarcoma (RMS) is a rare subtype of RMS. Optimal treatment remains undefined. PATIENTS AND METHODS Between 1995 and 2014, 45 patients were diagnosed and treated in three tertiary sarcoma centers (United Kingdom, Switzerland and Germany). Treatment characteristics and outcomes were analyzed. RESULTS The median age at diagnosis was 71.5 years (range=28.4-92.8 years). Median survival for those with localised disease (n=32, 71.1%) was 12.8 months (95% confidence interval=8.2-34.4) and for those with metastatic disease (n=13, 28.9%) 7.1 months (95% confidence interval=3.8-11.3). The relapse rate was 53.8% (4 local and 10 distant relapses). In total, 14 (31.1%) patients received first-line palliative chemotherapy, including multi-agent paediatric chemotherapy schedules (n=3), ifosfamide-doxorubicin (n=4) and single-agent doxorubicin (n=7). Response to chemotherapy was poor (one partial remission with vincristine-actinomycin D-cyclophosphamide and six cases of stable disease). Median progression-free survival was 2.3 (range=1.2-7.3) months. CONCLUSION Pleomorphic RMS is an aggressive neoplasm mainly affecting older patients, associated with a high relapse rate, a poor and short-lived response to standard chemotherapy and an overall poor prognosis for both localised and metastatic disease.
Abstract:
"What was I working on before the weekend?" and "What were the members of my team working on during the last week?" are common questions frequently asked by developers. They can be answered if one keeps track of who changes what in the source code. In this work, we present Replay, a tool that allows one to replay past changes as they happened at a fine-grained level, so that a developer can watch what she has done or understand what her colleagues have done in past development sessions. With this tool, developers are able not only to understand what sequence of changes brought the system to a certain state (e.g., the introduction of a defect), but also to deduce the reasons why their colleagues performed those changes. Such a tool can also be used to discover the changes that broke a developer's code.
Abstract:
Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers; they are heterogeneous by nature and require adjustment, composition and integration. Current static, predefined cloud integration architectures and models can meet specific application requirements only with difficulty. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives to ensembles of cloud services. Our model is user-centric and focuses on the specific application execution requirements by leveraging incubating virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how to best customize the offerings of virtualized resources.
Abstract:
Cost-efficient operation while satisfying performance and availability guarantees in Service Level Agreements (SLAs) is a challenge for Cloud Computing, as these are potentially conflicting objectives. We present a framework for SLA management based on multi-objective optimization. The framework features a forecasting model for determining the best virtual machine-to-host allocation given the need to minimize SLA violations, energy consumption and resource wasting. A comprehensive SLA management solution is proposed that uses event processing for monitoring and enables dynamic provisioning of virtual machines onto the physical infrastructure. We validated our implementation against several standard heuristics and were able to show that our approach performs significantly better.
Abstract:
Cloud Computing is an enabler for delivering large-scale, distributed enterprise applications with strict performance requirements. Such applications often have complex scaling and Service Level Agreement (SLA) management requirements. In this paper we present a simulation approach for validating and comparing SLA-aware scaling policies using the CloudSim simulator, with data from an actual Distributed Enterprise Information System (dEIS). We extend CloudSim with concurrent and multi-tenant task simulation capabilities. We then show how different scaling policies can be used for simulating multiple dEIS applications. We present multiple experiments depicting the impact of VM scaling on both datacenter energy consumption and dEIS performance indicators.
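The behaviour such simulations compare can be sketched with a reactive, threshold-based scaling policy over a synthetic workload trace. The M/M/1-style response-time curve, capacities, thresholds and trace below are made-up assumptions, not the CloudSim setup from the paper.

```python
# Assumed toy model: each VM serves 100 req/s; response time follows a simple
# M/M/1-style curve that degrades sharply as utilisation approaches 1.
SLA_RT = 0.5           # assumed SLA response-time threshold, seconds
VM_CAPACITY = 100.0    # requests/s one VM can absorb

def response_time(load, vms):
    util = load / (vms * VM_CAPACITY)
    return 0.05 / (1.0 - util) if util < 1.0 else float("inf")

def simulate(trace, scale_up=0.8, scale_down=0.3):
    vms, violations, history = 1, 0, []
    for load in trace:
        if response_time(load, vms) > SLA_RT:
            violations += 1                      # SLA breached this interval
        util = load / (vms * VM_CAPACITY)
        if util > scale_up:                      # reactive threshold policy
            vms += 1
        elif util < scale_down and vms > 1:
            vms -= 1
        history.append(vms)
    return violations, history

# A bursty workload trace (requests/s per monitoring interval), made up:
violations, history = simulate([50, 120, 250, 400, 400, 300, 150, 60])
```

On this trace the reactive policy lags the rising workload, accumulating SLA violations before enough VMs are provisioned, which is exactly the kind of behaviour a simulation-based comparison of policies is meant to expose.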
Abstract:
BACKGROUND AND PURPOSE Multi-phase postmortem CT angiography (MPMCTA) is increasingly being recognized as a valuable adjunct medicolegal tool to explore the vascular system. Adequate interpretation, however, requires knowledge about the most common technique-related artefacts. The purpose of this study was to identify and index the possible artefacts related to MPMCTA. MATERIAL AND METHODS An experienced radiologist blinded to all clinical and forensic data retrospectively reviewed 49 MPMCTAs. Each angiographic phase, i.e. arterial, venous and dynamic, was analysed separately to identify phase-specific artefacts based on location and aspect. RESULTS Incomplete contrast filling of the cerebral venous system was the most commonly encountered artefact, followed by contrast agent layering in the lumen of the thoracic aorta. Enhancement or so-called oedematization of the digestive system mucosa was also frequently observed. CONCLUSION All MPMCTA artefacts observed and described here are reproducible and easily identifiable. Knowledge about these artefacts is important to avoid misinterpreting them as pathological findings.
Abstract:
BACKGROUND Antifibrinolytics have been used for 2 decades to reduce bleeding in cardiac surgery. MDCO-2010 is a novel, synthetic, serine protease inhibitor. We describe the first experience with this drug in patients. METHODS In this phase II, double-blind, placebo-controlled study, 32 patients undergoing isolated primary coronary artery bypass grafting with cardiopulmonary bypass were randomly assigned to 1 of 5 increasing dosage groups of MDCO-2010. The primary aim was to evaluate pharmacokinetics (PK) with assessment of plasmatic concentrations of the drug, short-term safety, and tolerance of MDCO-2010. Secondary end points were influence on coagulation, chest tube drainage, and transfusion requirements. RESULTS PK analysis showed linear dosage-proportional correlation between MDCO-2010 infusion rate and PK parameters. Blood loss was significantly reduced in the 3 highest dosage groups compared with control (P = 0.002, 0.004 and 0.011, respectively). The incidence of allogeneic blood product transfusions was lower with MDCO-2010: 4/24 (17%) vs 4/8 (50%) in the control group. MDCO-2010 exhibited dosage-dependent antifibrinolytic effects through suppression of D-dimer generation and inhibition of tissue plasminogen activator-induced lysis in ROTEM analysis, as well as anticoagulant effects demonstrated by prolongation of activated clotting time and activated partial thromboplastin time. No systematic differences in markers of end organ function were observed among treatment groups. Three patients in the MDCO-2010 groups experienced serious adverse events. One patient experienced intraoperative thrombosis of venous grafts considered possibly related to the study drug. No reexploration for mediastinal bleeding was required, and there were no deaths. CONCLUSIONS This first-in-patient study demonstrated dosage-proportional PK for MDCO-2010 and reduction of chest tube drainage and transfusions in patients undergoing primary coronary artery bypass grafting. Antifibrinolytic and anticoagulant effects were demonstrated using various markers of coagulation. MDCO-2010 was well tolerated and showed an acceptable initial safety profile. Larger multi-institutional studies are warranted to further investigate the safety and efficacy of this compound.
Abstract:
Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system, increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a cloud management system can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means for specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all her/his applications are met. She/he is also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while the performance of tenants’ applications is maximized.
Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machines (VMs) bound to application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included into semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
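A genetic algorithm for multi-criteria VM allocation, as mentioned above, can be sketched as follows. The fitness combines overload as an SLA-risk proxy with the number of powered-on hosts as an energy proxy; the instance, population size and operators are illustrative assumptions, not the thesis's system.

```python
import random
random.seed(1)  # fixed seed so the sketch is reproducible

# Toy instance (assumed): place 6 VMs on 3 hosts of capacity 8 each.
DEMANDS = [4, 3, 3, 2, 2, 1]
N_HOSTS, CAPACITY = 3, 8

def fitness(genome):
    # genome[i] = host assigned to VM i; lower fitness is better
    used = [0] * N_HOSTS
    for vm, host in enumerate(genome):
        used[host] += DEMANDS[vm]
    overload = sum(max(0, u - CAPACITY) for u in used)  # SLA-risk proxy
    active = sum(1 for u in used if u)                  # energy proxy
    return 100 * overload + active

def evolve(pop_size=40, generations=60, mut=0.1):
    pop = [[random.randrange(N_HOSTS) for _ in DEMANDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]       # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DEMANDS))
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < mut:          # point mutation
                child[random.randrange(len(child))] = random.randrange(N_HOSTS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

Here elitist selection plus one-point crossover and point mutation typically finds an overload-free packing of the six VMs; encoding each objective as a weighted fitness term is the simplest way to fold multiple criteria into a single GA.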