27 results for Distributed parameter systems
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and the number of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for managing the complexity of scaling applications composed of multiple services, using mechanisms based on the fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
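The abstract above describes deriving refined service level objectives from correlations between monitored indicators and using them to drive scaling. The minimal Python sketch below illustrates that idea under simple assumptions (a linear relation between one internal indicator and the user-facing response time); the metric names, thresholds, and the scale-out rule are hypothetical and not taken from the paper.

```python
# Hedged sketch (not the paper's implementation): derive a refined service level
# objective (SLO) on an internal indicator that correlates strongly with the
# user-facing response time, then use it as a scaling trigger.
# Metric names, thresholds and the scale-out rule are illustrative assumptions.
import numpy as np

def refined_slo(indicator, response_time_ms, sla_limit_ms, min_corr=0.8):
    """Return the indicator value at which response time reaches the SLA limit,
    if the indicator correlates strongly enough with response time."""
    r = np.corrcoef(indicator, response_time_ms)[0, 1]
    if abs(r) < min_corr:
        return None                      # correlation too weak to refine the SLO
    slope, intercept = np.polyfit(indicator, response_time_ms, 1)
    return (sla_limit_ms - intercept) / slope

def should_scale_out(current_value, slo_threshold, margin=0.9):
    """Scale out shortly before the refined SLO would be breached."""
    return slo_threshold is not None and current_value > margin * slo_threshold

# Synthetic monitoring data: CPU utilization vs. request latency (ms).
rng = np.random.default_rng(0)
cpu = rng.uniform(0.2, 0.95, 200)
latency = 40 + 300 * cpu + rng.normal(0, 10, 200)

threshold = refined_slo(cpu, latency, sla_limit_ms=250)
print("refined CPU SLO:", threshold)
print("scale out at 80% CPU:", should_scale_out(0.80, threshold))
```

In practice one would repeat this for every strongly correlated indicator and feed the resulting thresholds to the scaling controller.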
Abstract:
The Doctoral Workshop on Distributed Systems was held in Les Plans-sur-Bex, Switzerland, on June 26-28, 2013. Ph.D. students from the Universities of Neuchâtel and Bern, as well as from the University of Applied Sciences of Fribourg, presented their current research work and discussed recent research results. This technical report includes the extended abstracts of the talks given during the workshop.
Abstract:
Cloud computing is an enabler for delivering large-scale, distributed enterprise applications with strict performance requirements. Such applications often have complex scaling and Service Level Agreement (SLA) management requirements. In this paper we present a simulation approach for validating and comparing SLA-aware scaling policies in the CloudSim simulator, using data from an actual Distributed Enterprise Information System (dEIS). We extend CloudSim with concurrent and multi-tenant task simulation capabilities. We then show how different scaling policies can be used for simulating multiple dEIS applications. We present multiple experiments depicting the impact of VM scaling on both datacenter energy consumption and dEIS performance indicators.
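As a rough illustration of how SLA-aware scaling policies can be compared in simulation, the toy discrete-time loop below contrasts a static allocation with a reactive threshold policy, counting SLA violations and using VM-hours as a crude energy proxy. It is plain Python, not CloudSim, and the capacities, thresholds, and load trace are invented for illustration.

```python
# Toy discrete-time comparison of scaling policies (plain Python, not CloudSim).
# Capacities, thresholds and the sinusoidal load trace are illustrative assumptions.
import math
import random

CAPACITY_PER_VM = 100          # requests/s a single VM serves within the SLA
random.seed(1)
load = [60 + 50 * math.sin(t / 10) + random.uniform(-10, 10) for t in range(200)]

def simulate(policy):
    vms, sla_violations, vm_hours = 2, 0, 0
    for demand in load:
        vms = policy(vms, demand)
        if demand > vms * CAPACITY_PER_VM:
            sla_violations += 1        # offered load exceeds provisioned capacity
        vm_hours += vms                # crude energy proxy
    return sla_violations, vm_hours

def static_policy(vms, demand):
    return 2                           # fixed allocation, no scaling

def reactive_policy(vms, demand, high=0.8, low=0.4):
    utilization = demand / (vms * CAPACITY_PER_VM)
    if utilization > high:
        return vms + 1                 # scale out
    if utilization < low and vms > 1:
        return vms - 1                 # scale in
    return vms

for name, policy in (("static", static_policy), ("reactive", reactive_policy)):
    violations, energy = simulate(policy)
    print(f"{name:8s} SLA violations={violations:3d}  VM-hours={energy}")
```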
Abstract:
The Doctoral Workshop on Distributed Systems was held in Kandersteg, Switzerland, on June 3-5, 2014. Ph.D. students from the Universities of Neuchâtel and Bern, as well as from the University of Applied Sciences of Fribourg, presented their current research work and discussed recent research results. This technical report includes the extended abstracts of the talks given during the workshop.
Abstract:
In 2011, the Tumour Node Metastasis (TNM) staging system still remains the gold standard for stratifying colorectal cancer (CRC) patients into prognostic subgroups, and is considered a solid basis for treatment management. Nevertheless, there is still a challenge with regard to therapeutic strategy; stage II patients are not typically selected for postoperative adjuvant chemotherapy, although some stage II patients have a comparable outcome to stage III patients, who themselves do receive such treatment. Consequently, there has been an inundation of 'prognostic biomarker' studies aiming to improve the prognostic stratification power of the TNM staging system. Most proposed biomarkers are not implemented because of a lack of reproducibility, validation and standardisation. This problem can be partially resolved by following the REMARK guidelines. In search of novel prognostic factors for patients with CRC, one might glance at a table in the book entitled 'Prognostic Factors in Cancer', published by the International Union against Cancer (UICC) in 2006, in which TNM stage, L and V classifications are considered 'essential' prognostic factors, whereas tumour grade, perineural invasion, tumour budding and tumour-border configuration, among others, are proposed as 'additional' prognostic factors. Histopathology reports normally include the 'essential' features and are accompanied by tumour grade, histological subtype and information on perineural invasion, but interestingly, the tumour-border configuration (i.e., growth pattern) and especially tumour budding are rarely reported. Although scoring systems such as the 'BRE' in breast and 'Gleason' in prostate cancer are solidly based on histomorphological features and used in daily practice, no such additional scoring system to complement TNM staging is available for CRC. Regardless of differences in study design and methods for tumour-budding assessment, the prognostic power of tumour budding has been confirmed by dozens of study groups worldwide, suggesting that tumour budding may be a valuable candidate for inclusion in a future prognostic scoring system for CRC. This mini-review therefore attempts to present a short and concise overview of tumour budding, including morphological, molecular and prognostic aspects, underlining its inter-disciplinary relevance.
Abstract:
Rationale: Focal onset epileptic seizures are due to abnormal interactions between distributed brain areas. By estimating the cross-correlation matrix of multi-site intra-cerebral EEG recordings (iEEG), one can quantify these interactions. To assess the topology of the underlying functional network, the binary connectivity matrix has to be derived from the cross-correlation matrix by use of a threshold. Classically, a unique threshold is used, which constrains the topology [1]. Our method sets the threshold in a data-driven way by separating genuine from random cross-correlation. We compare our approach to the fixed-threshold method and study the dynamics of the functional topology.
Methods: We investigate the iEEG of patients suffering from focal onset seizures who underwent evaluation for the possibility of surgery. The equal-time cross-correlation matrices are evaluated using a sliding time window. We then compare three approaches for deriving the corresponding binary networks. For each time window:
* Our parameter-free method derives from the cross-correlation strength matrix (CCS) [2]. It aims at disentangling genuine from random correlations (due to the finite length and varying frequency content of the signals). In practice, a threshold is evaluated for each pair of channels independently, in a data-driven way.
* The fixed mean degree (FMD) method uses a unique threshold on the whole connectivity matrix so as to ensure a user-defined mean degree.
* The varying mean degree (VMD) method uses the mean degree of the CCS network to set a unique threshold for the entire connectivity matrix.
Finally, the connectivity (c), connectedness (given by k, the number of disconnected sub-networks), and mean global and local efficiencies (Eg and El, respectively) are computed from the FMD, CCS and VMD networks and their corresponding random and lattice networks.
Results: Compared to FMD and VMD, CCS networks present:
* topologies that differ in terms of c, k, Eg and El;
* from the pre-ictal to the ictal and then post-ictal period, time courses of the topological features that are more stable within a period and more contrasted from one period to the next.
For CCS, pre-ictal connectivity is low, increases to a high level during the seizure, then decreases at offset. k shows a "U-curve", underlining the synchronization of all electrodes during the seizure. The Eg and El time courses fluctuate between the values of the corresponding random and lattice networks in a reproducible manner.
Conclusions: The definition of a data-driven threshold provides new insights into the topology of epileptic functional networks.
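To make the thresholding step concrete, the sketch below builds a binary network from a correlation matrix using a fixed-mean-degree (FMD) threshold and computes the graph measures mentioned above (number of components, global and local efficiency). It uses surrogate random signals rather than iEEG and is not the authors' CCS implementation; the target mean degree is an illustrative assumption.

```python
# Hedged sketch (not the authors' CCS code): threshold a cross-correlation
# matrix at a fixed mean degree (FMD) and compute basic network measures.
# The surrogate signals and the target mean degree are illustrative assumptions.
import numpy as np
import networkx as nx

def fmd_network(corr, mean_degree):
    """Keep the strongest |correlations| so that the mean degree is ~mean_degree."""
    n = corr.shape[0]
    iu = np.triu_indices(n, k=1)
    strengths = np.abs(corr[iu])
    n_edges = int(round(mean_degree * n / 2))
    threshold = np.sort(strengths)[::-1][n_edges - 1]
    adjacency = (np.abs(corr) >= threshold) & ~np.eye(n, dtype=bool)
    return nx.from_numpy_array(adjacency.astype(int))

# Surrogate "recordings": 20 channels, 1000 samples of white noise.
rng = np.random.default_rng(42)
signals = rng.standard_normal((20, 1000))
corr = np.corrcoef(signals)

g = fmd_network(corr, mean_degree=4)
k = nx.number_connected_components(g)      # connectedness (disconnected sub-networks)
eg = nx.global_efficiency(g)               # mean global efficiency
el = nx.local_efficiency(g)                # mean local efficiency
print(f"k={k}, Eg={eg:.3f}, El={el:.3f}")
```

The CCS approach described in the abstract would instead derive one threshold per channel pair in a data-driven way rather than a single global threshold.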
Abstract:
Software metrics offer us the promise of distilling useful information from vast amounts of software in order to track development progress, to gain insights into the nature of the software, and to identify potential problems. Unfortunately, however, many software metrics exhibit highly skewed, non-Gaussian distributions. As a consequence, usual ways of interpreting these metrics --- for example, in terms of "average" values --- can be highly misleading. Many metrics, it turns out, are distributed like wealth --- with high concentrations of values in selected locations. We propose to analyze software metrics using the Gini coefficient, a higher-order statistic widely used in economics to study the distribution of wealth. Our approach allows us not only to observe changes in software systems efficiently, but also to assess project risks and monitor the development process itself. We apply the Gini coefficient to numerous metrics over a range of software projects, and we show that many metrics not only display remarkably high Gini values, but that these values are remarkably consistent as a project evolves over time.
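For reference, the Gini coefficient of a metric distribution can be computed directly from the sorted values. The snippet below uses the standard mean-difference formulation; the "methods per class" sample is made up to show a wealth-like, skewed distribution.

```python
# Gini coefficient of a software metric's value distribution, using the
# standard formulation based on sorted values. The sample data are invented.
import numpy as np

def gini(values):
    """Gini coefficient in [0, 1): 0 = perfectly even, near 1 = highly concentrated."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with x sorted and i = 1..n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

methods_per_class = [1, 2, 2, 3, 3, 4, 5, 8, 21, 55]
print(f"Gini = {gini(methods_per_class):.2f}")   # ~0.63: a few classes hold most methods
```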
Abstract:
We describe a system for performing SLA-driven management and orchestration of distributed infrastructures composed of services supporting mobile computing use cases. In particular, we focus on a Follow-Me Cloud scenario in which mobile users access cloud-enabled services. We combine an SLA-driven approach to infrastructure optimization with forecast-based preventive actions against performance degradation and with pattern detection for supporting mobile cloud infrastructure management. We present our system's information model and architecture, including the algorithmic support and the proposed scenarios for system evaluation.
Abstract:
The intention of an authentication and authorization infrastructure (AAI) is to simplify and unify access to different web resources. With a single login, a user can access web applications at multiple organizations. The Shibboleth authentication and authorization infrastructure is a standards-based, open source software package for web single sign-on (SSO) across or within organizational boundaries. It allows service providers to make fine-grained authorization decisions for individual access of protected online resources. The Shibboleth system is a widely used AAI, but it supports only the protection of browser-based web resources. We have implemented a Shibboleth AAI extension to protect web services using the Simple Object Access Protocol (SOAP). Besides user authentication for browser-based web resources, this extension also provides user and machine authentication for web service-based resources. Although implemented for a Shibboleth AAI, the architecture can be easily adapted to other AAIs.
Abstract:
We investigate the problem of distributed sensor failure detection in networks with a small number of defective sensors, whose measurements differ significantly from those of their neighbors. We build on the sparse nature of the binary sensor failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on Group Testing (GT), where the latter has so far been used in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed for its detection with high probability. We then extend our study to the detection of multiple defective sensors by appropriately modifying the message exchange protocol and the decoding procedure. We show that, for small and medium-sized networks, the number of messages required for successful detection is actually smaller than the minimal number computed theoretically. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
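The following toy, centralized sketch illustrates only the group-testing principle behind this approach (random binary test pools, OR outcomes, and a Hamming-distance decoder for a single defective sensor); the paper's actual algorithm is distributed via gossip, which is not reproduced here, and the network size and number of tests are assumptions.

```python
# Centralized toy version of the group-testing (GT) principle; the paper's
# algorithm is distributed via gossip, which this sketch does not reproduce.
# Network size, number of tests and the single-defective setting are assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_tests = 50, 12
defective = 17                                  # index of the faulty sensor

# Random binary test matrix: row t lists the sensors pooled into test t.
W = rng.integers(0, 2, size=(n_tests, n_sensors))

# A test outcome is positive iff the pool contains a defective sensor (boolean OR).
truth = np.zeros(n_sensors, dtype=int)
truth[defective] = 1
outcomes = (W @ truth > 0).astype(int)

# Distance decoder: pick the sensor whose participation pattern (column of W)
# is closest in Hamming distance to the outcome vector.
distances = np.sum(W != outcomes[:, None], axis=0)
print("estimated defective sensor:", int(np.argmin(distances)))   # 17, with high probability
```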
Abstract:
Manure scrapers are widely used in dairy cow loose-housing systems. In order to evaluate the effects of the scrapers on the cows, we assessed their impact on the animals' cardiac activity, feeding behaviour, and the behavioural reactions of cows confronted with different types of scrapers. In part I of the study, we measured cardiac activity (mean R–R interval and RMSSD, a parameter of heart-rate variability) whilst observing the behaviour of 29 focal cows on three farms during situations with and without active manure scrapers. Lower RMSSD values were observed during scraping events while cows were either lying, standing or walking in the alleyway, standing completely in the lying cubicle, or standing half in the lying cubicle (P=0.03), but only tended to differ while directly confronted with the scraper (P=0.06). This indicates that dairy cows experienced at least some mild stress during manure-scraping events. In part II, the feeding behaviour of 12 cows on each of two farms was recorded by means of a jaw-movement sensor and compared between situations with the manure-scraping event following forage provision either within or outside the main daily feeding period (i.e. within 1 or after 2 h from forage provisioning, respectively). The duration of night-time feeding (P=0.049) and the number of feeding bouts (P=0.036) were higher when a manure-scraping event took place within the main daily feeding period, indicating that the cows' feeding behaviour had been disturbed. In part III, we observed the cows' behaviour on 15 farms during eight manure scraping events per farm, where each of five farms had one of three different scraper types. We assessed the cows' immediate reactions when confronted with the scraper. In addition, we recorded the number of animals present in the alleyways before and after the manure-scraping events. The more cows that were present in the alleyways before the scraping event, the lower the proportion of cows showing direct behavioural reactions both with (P=0.017) and without (P=0.028) scraper contact, and the higher the number of cows that left the alleyways (P<0.001). Scraper type did not influence the proportion of cows showing behavioural reactions. In conclusion, our results show that dairy cows perceive the manure-scraping event negatively in some situations, that feeding behaviour may be disturbed when scrapers are active during the main feeding period, and that cows avoid the scraper during crowded situations.
Abstract:
In this work, we propose a distributed rate allocation algorithm that minimizes the average decoding delay for multimedia clients in inter-session network coding systems. We consider a scenario where the users are organized in a mesh network and each user requests the content of one of the available sources. We propose a novel distributed algorithm in which network users determine the coding operations and the packet rates to be requested from the parent nodes such that the decoding delay is minimized for all clients. A rate allocation problem is solved by every user, which seeks the rates that minimize the average decoding delay for its children and for itself. Since this optimization problem is a priori non-convex, we introduce the concept of equivalent packet flows, which permits us to estimate the expected number of packets that every user needs to collect for decoding. We then decompose our original rate allocation problem into a set of convex subproblems, which are eventually combined to obtain an effective approximate solution to the delay minimization problem. The results demonstrate that the proposed scheme eliminates bottlenecks and reduces the decoding delay experienced by users with limited bandwidth resources. We validate the performance of our distributed rate allocation algorithm in different video streaming scenarios using the NS-3 network simulator. We show that our system is able to benefit from inter-session network coding for the simultaneous delivery of video sessions in networks with path diversity.
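The "equivalent packet flow" notion can be illustrated with a small Monte Carlo estimate of how many random GF(2)-coded packets a client must collect, on average, before it can decode k source packets. This toy model ignores network topology and is not the authors' rate allocation algorithm; the field size and parameters are assumptions.

```python
# Toy Monte Carlo estimate of the "equivalent packet flow" for one client:
# the expected number of random GF(2)-coded packets it must collect before
# the received coefficient vectors reach full rank (i.e. it can decode the
# k source packets). Not the authors' rate allocation algorithm.
import random

def try_insert(basis, v, k):
    """Insert coefficient bitmask v into a GF(2) basis (dict keyed by pivot bit).
    Returns True if v is linearly independent of the current basis."""
    for bit in reversed(range(k)):
        if not (v >> bit) & 1:
            continue
        if bit not in basis:
            basis[bit] = v
            return True
        v ^= basis[bit]
    return False

def expected_packets_to_decode(k, trials=2000, seed=0):
    random.seed(seed)
    total = 0
    for _ in range(trials):
        basis, rank = {}, 0
        while rank < k:
            v = random.randrange(1, 2 ** k)     # random nonzero coefficient vector
            total += 1
            if try_insert(basis, v, k):
                rank += 1
    return total / trials

k = 8
print(f"on average ~{expected_packets_to_decode(k):.2f} coded packets to decode {k} sources")
```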