384 results for distributed model
Abstract:
This work-in-progress paper presents an ensemble-based model for detecting and mitigating Distributed Denial-of-Service (DDoS) attacks, and its partial implementation. The model utilises network traffic analysis and MIB (Management Information Base) server load analysis features for detecting a wide range of network and application layer DDoS attacks and distinguishing them from Flash Events. The proposed model will be evaluated against realistic synthetic network traffic generated using a software-based traffic generator that we have developed as part of this research. In this paper, we summarise our previous work, highlight the current work being undertaken along with preliminary results obtained and outline the future directions of our work.
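The abstract does not specify how the ensemble combines its detectors. One common combination rule, shown below purely as a hypothetical Python sketch, is majority voting over independent detectors built on traffic and MIB server-load features; every detector name and threshold here is invented for illustration, not taken from the paper:

```python
def ensemble_detect(classifiers, features):
    """Majority vote across detectors; each returns True for 'attack'."""
    votes = sum(1 for clf in classifiers if clf(features))
    return votes > len(classifiers) / 2

# Hypothetical single-feature detectors (thresholds are illustrative only).
high_pps = lambda f: f["packets_per_sec"] > 10_000   # network-layer volume
high_cpu = lambda f: f["mib_cpu_load"] > 0.9         # MIB server-load feature
many_srcs = lambda f: f["unique_sources"] > 5_000    # source dispersion

sample = {"packets_per_sec": 50_000, "mib_cpu_load": 0.95, "unique_sources": 200}
print(ensemble_detect([high_pps, high_cpu, many_srcs], sample))  # -> True (2 of 3 vote attack)
```

Distinguishing a DDoS attack from a Flash Event would need further features (e.g. source dispersion over time), which this toy omits.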
Abstract:
The symbolic and improvisational nature of livecoding requires a shared networking framework to be flexible and extensible, while at the same time providing support for synchronisation, persistence and redundancy. Above all, the framework should be robust and available across a range of platforms. This paper proposes tuple space as a suitable framework for network communication in ensemble livecoding contexts. The role of tuple space as a concurrency framework and the associated timing aspects of the tuple space model are explored through Spaces, an implementation of tuple space for the Impromptu environment.
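For readers unfamiliar with the tuple space model, its core operations — `out` to write, `rd` to read non-destructively, `in` to read and remove, with pattern matching — can be sketched in a few lines of Python. This is an illustrative toy only, not the Spaces implementation for Impromptu:

```python
import threading

class TupleSpace:
    """Minimal tuple space: out() writes, rd() reads without removing,
    in_() reads and removes. None acts as a wildcard in patterns."""
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()   # concurrent performers share the space

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        with self._lock:
            for t in self._tuples:
                if self._match(pattern, t):
                    return t
        return None

    def in_(self, pattern):
        with self._lock:
            for i, t in enumerate(self._tuples):
                if self._match(pattern, t):
                    return self._tuples.pop(i)
        return None

ts = TupleSpace()
ts.out(("tempo", 120))                  # one performer publishes a tempo
print(ts.rd(("tempo", None)))           # ('tempo', 120)
```

A real livecoding deployment would add blocking reads and network transport; the decoupling shown here (writers and readers never reference each other directly) is what makes the model attractive for ensembles.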
Abstract:
Deciding the appropriate population size and number of islands for distributed island-model genetic algorithms is often critical to the algorithm's success. This paper outlines a method that automatically searches for good combinations of island population sizes and the number of islands. The method is based on a race between competing parameter sets, and on collaborative seeding of new parameter sets. This method is applicable to any problem, and makes distributed genetic algorithms easier to use by reducing the number of user-set parameters. The experimental results show that the proposed method robustly and reliably finds population and island settings that are comparable to those found with traditional trial-and-error approaches.
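The racing-and-seeding idea can be sketched as follows. This is a hypothetical Python illustration: the toy fitness function and the perturbation steps are invented stand-ins, not the paper's actual procedure:

```python
import random

def evaluate(params):
    """Invented stand-in for a fixed-budget GA run: scores an
    (islands, pop_size) configuration on a toy objective."""
    islands, pop = params
    random.seed(islands * 1000 + pop)          # deterministic toy score
    return -abs(islands * pop - 100) + random.uniform(-5.0, 5.0)

def race(candidates, rounds=3):
    """Each round: evaluate all parameter sets, keep the better half,
    and seed replacements by perturbing the current best
    (collaborative seeding). Returns the best configuration found."""
    for _ in range(rounds):
        scored = sorted(candidates, key=evaluate, reverse=True)
        keep = max(1, len(scored) // 2)
        best_islands, best_pop = scored[0]
        seeds = [(max(1, best_islands + random.choice([-1, 1])),
                  max(2, best_pop + random.choice([-10, 10])))
                 for _ in range(len(scored) - keep)]
        candidates = scored[:keep] + seeds
    return max(candidates, key=evaluate)

best = race([(2, 10), (4, 25), (8, 50), (16, 5)])
```

In the paper's setting `evaluate` would be a real distributed GA run, so the race amortises the cost of parameter search over runs that contribute to solving the problem anyway.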
Abstract:
Distributed Genetic Algorithms (DGAs) designed for the Internet have to take its high communication cost into consideration. For island model GAs, the migration topology has a major impact on DGA performance. This paper describes and evaluates an adaptive migration topology optimizer that keeps the communication load low while maintaining high solution quality. Experiments on benchmark problems show that the optimized topology outperforms static or random topologies of the same degree of connectivity. The applicability of the method on real-world problems is demonstrated on a hard optimization problem in VLSI design.
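A degree-preserving adaptive rewiring step of the kind described might look like the hypothetical Python sketch below; the benefit measure is an assumed input, not the paper's actual optimizer, and keeping the out-degree fixed is what keeps the communication load constant:

```python
import random

def random_topology(n_islands, degree):
    """Directed migration topology: each island migrates to `degree` others."""
    topo = {}
    for i in range(n_islands):
        others = [j for j in range(n_islands) if j != i]
        topo[i] = random.sample(others, degree)
    return topo

def adapt(topo, benefit):
    """Rewire each island's least beneficial outgoing link to a fresh
    target, keeping the out-degree (hence communication load) unchanged.
    `benefit[(i, j)]` is an assumed measured payoff of migrating i -> j."""
    n = len(topo)
    for i, targets in topo.items():
        worst = min(targets, key=lambda j: benefit.get((i, j), 0.0))
        fresh = [j for j in range(n) if j != i and j not in targets]
        if fresh:
            targets[targets.index(worst)] = random.choice(fresh)
    return topo
```

Measuring migration benefit (e.g. fitness improvement attributable to immigrants) is the part the paper's optimizer would define precisely.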
Abstract:
The behaviour of single installations of solar energy systems is well understood; however, what happens at an aggregated location, such as a distribution substation, when the outputs of groups of installations accumulate is not so well understood. This paper considers groups of installations attached to distribution substations on which the load is primarily commercial and industrial. Agent-based modelling has been used to model the physical electrical distribution system and the behaviour of equipment outputs towards the consumer end of the network. The paper reports the approach used to simulate both the electricity consumption of groups of consumers and the output of solar systems subject to weather variability, with the inclusion of cloud data from the Bureau of Meteorology (BOM). The data sets currently used are for Townsville, North Queensland. The initial characteristics that indicate whether solar installations are cost effective from an electricity distribution perspective are discussed.
Abstract:
Downtime (DT) caused by non-availability of equipment and equipment breakdown has non-trivial impact on the performance of construction projects. Earlier research has often addressed this fact, but it has rarely explained the causes and consequences of DT – especially in the context of developing countries. This paper presents a DT model to address this issue. Using this model, the generic factors and processes related to DT are identified, and the impact of DT is quantified. By applying the model framework to nine road projects in Nepal, the impact of DT is explored in terms of its duration and cost. The research findings highlight how various factors and processes interact with each other to create DT, and mitigate or exacerbate its impact on project performance. It is suggested that construction companies need to adopt proactive equipment management and maintenance programs to minimize the impact of DT.
Abstract:
Enterprise Systems (ES) can be understood as the de facto standard for holistic operational and managerial support within an organization. Most commonly, ES are offered as commercial off-the-shelf packages, requiring customization in the user organization. This process is a complex and resource-intensive task, which often prevents small and midsize enterprises (SME) from undertaking configuration projects. Especially in the SME market, independent software vendors provide pre-configured ES for a small customer base. The problem of ES configuration is shifted from the customer to the vendor, but remains critical. We argue that the as yet unexplored link between process configuration and business document configuration must be examined more closely, as both types of configuration are closely tied to one another.
Abstract:
The study presented here applies the highly parameterised semi-distributed U.S. Department of Agriculture Soil and Water Assessment Tool (SWAT) to an Australian subtropical catchment. SWAT has been applied to numerous catchments worldwide and is considered to be a useful tool that is under ongoing development with contributions coming from different research groups in different parts of the world. In a preliminary run the SWAT model application for the Elimbah Creek catchment has estimated water yield for the catchment and has quantified the different sources. For the modelling period of April 1999 to September 2009 the results show that the main sources of water in Elimbah Creek are total surface runoff and lateral flow (65%). Base-flow contributes 36% to the total runoff. On a seasonal basis modelling results show a shift in the source of water contributing to Elimbah Creek from surface runoff and lateral flow during intense summer storms to base-flow conditions during dry months. Further calibration and validation of these results will confirm that SWAT provides an alternative to Australian water balance models.
Abstract:
We examine which capabilities technologies provide to support collaborative process modeling. We develop a model that explains how technology capabilities impact cognitive group processes, and how they lead to improved modeling outcomes and positive technology beliefs. We test this model through a free simulation experiment of collaborative process modelers structured around a set of modeling tasks. With our study, we provide an understanding of the process of collaborative process modeling, and detail implications for research and guidelines for the practical design of collaborative process modeling.
Abstract:
The influence of pH on the interfacial energy and wettability distributed over the phospholipid bilayer surface was studied, and the importance of cartilage hydrophobicity (wettability) to the coefficient of friction (f) was established. It is argued that the wettability of cartilage significantly depends on the number of phospholipid bilayers acting as solid lubricant; the hypothesis was proven by conducting friction tests with normal and lipid-depleted cartilage samples. A lamellar-roller-bearing lubrication model was devised involving two mechanisms: (i) lamellar frictionless movement of bilayers, and (ii) a roller-bearing lubrication mode through structured synovial fluid, which operates when lamellar spheres, liposomes and macromolecules act like a roller bearing situated between two cartilage surfaces in effective biological lubrication.
Abstract:
A multi-segment foot model was used to develop an accurate and reliable kinematic model to describe in-shoe foot kinematics during gait.
Abstract:
Application of 'advanced analysis' methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. A primary objective was to produce a comprehensive range of new distributed plasticity analytical benchmark solutions for verification of the concentrated plasticity methods. A distributed plasticity model was developed using shell finite elements to explicitly account for the effects of gradual yielding and spread of plasticity, initial geometric imperfections, residual stresses and local buckling deformations. The model was verified by comparison with large-scale steel frame test results and a variety of existing analytical benchmark solutions. This paper presents a description of the distributed plasticity model and details of the verification study.
Abstract:
This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications of multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system, and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no modification required. The only trusted code is our modified operating system kernel. Finally, we present a scenario of intrusion in a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at each host level.
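The label-propagation core of such a taint-marking scheme can be illustrated with a small Python sketch. The real system runs inside the Linux kernel as an LSM, so this user-space toy only mirrors the idea; the policy representation here is invented for illustration:

```python
class TaintTracker:
    """Attach taint labels to objects; propagate labels on information
    flow and report accesses that violate the declared policy."""
    def __init__(self, policy):
        self.labels = {}      # object -> set of taint labels
        self.policy = policy  # taint label -> set of objects allowed to receive it

    def label(self, obj, taint):
        self.labels.setdefault(obj, set()).add(taint)

    def flow(self, src, dst):
        """Propagate src's labels to dst; return any policy violations."""
        violations = []
        for taint in self.labels.get(src, set()):
            if dst not in self.policy.get(taint, set()):
                violations.append((taint, dst))
            self.labels.setdefault(dst, set()).add(taint)
        return violations

tracker = TaintTracker(policy={"secret": {"backup_host"}})
tracker.label("/etc/passwd", "secret")
print(tracker.flow("/etc/passwd", "web_socket"))  # [('secret', 'web_socket')]
```

Note that labels propagate even on violating flows, matching the fine-grained tracking described in the abstract: the violation is reported, but the information's provenance is never lost.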
Abstract:
This thesis presents a multi-criteria optimisation study of group replacement schedules for water pipelines, a capital-intensive and service-critical decision. A new mathematical model was developed which minimises total replacement costs while maintaining a satisfactory level of service. The research outcomes are expected to enrich the body of knowledge of multi-criteria decision optimisation where group scheduling is required. The model has the potential to optimise replacement planning for other types of linear asset networks, resulting in bottom-line benefits for end users and communities. The results of a real case study show that the new model can effectively reduce total costs and service interruptions.
Abstract:
This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness of fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival.
Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov Chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
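The BIC-based weighting that underlies a BMA approach can be shown concretely. The sketch below uses invented log-likelihoods and parameter counts for the three candidate models (single Weibull, mixture, cure model) purely for illustration; the paper's actual fits are not reproduced here:

```python
import math

def bma_weights(log_likelihoods, n_params, n_obs):
    """Approximate posterior model probabilities from BIC:
    BIC = k*ln(n) - 2*ln(L); each model's weight is proportional
    to exp(-BIC/2), normalised to sum to one."""
    bics = [k * math.log(n_obs) - 2.0 * ll
            for ll, k in zip(log_likelihoods, n_params)]
    best = min(bics)                                  # subtract for stability
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

# Invented fits: single Weibull (2 params), mixture (5), cure model (3).
weights = bma_weights([-210.0, -205.0, -206.0], [2, 5, 3], n_obs=100)
# An averaged survival prediction is then sum(w_m * S_m(t)) over models m.
```

With a large sample, one weight tends toward 1 (a single model dominates, as the abstract reports); with small samples the weights spread out, and averaging hedges against picking the wrong model.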