74 results for OpenFlow, SDN, Software-Defined Networking, Cloud
Abstract:
Clouds are the largest source of uncertainty in climate science, and remain a weak link in modeling tropical circulation. A major challenge is to establish connections between particulate microphysics and macroscale turbulent dynamics in cumulus clouds. Here we address the issue from the latter standpoint. First we show how to create bench-scale flows that reproduce a variety of cumulus-cloud forms (including two genera and three species), and track complete cloud life cycles, e.g., from a "cauliflower" congestus to a dissipating fractus. The flow model used is a transient plume with volumetric diabatic heating scaled dynamically to simulate latent-heat release from phase changes in clouds. Laser-based diagnostics of steady plumes reveal Riehl-Malkus-type protected cores. They also show that, unlike the constancy implied by early self-similar plume models, the diabatic heating raises the Taylor entrainment coefficient just above cloud base and depresses it at higher levels. This behavior is consistent with cloud-dilution rates found in recent numerical simulations of steady deep convection, and with aircraft-based observations of homogeneous mixing in clouds. In-cloud diabatic heating thus emerges as the key driver in cloud development, and could well provide a major link between microphysics and cloud-scale dynamics.
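For readers unfamiliar with the terminology, the Taylor entrainment hypothesis mentioned above closes the plume equations by assuming the inflow velocity at the plume edge is proportional to the local mean vertical velocity. A minimal sketch of the classical steady-plume form (assuming top-hat profiles and Boussinesq scaling; not the transient, diabatically heated model used here, whose point is precisely that the coefficient is not constant with height):

```latex
u_e = \alpha\, w , \qquad
\frac{d}{dz}\!\left(b^{2} w\right) = 2\,\alpha\, b\, w
```

Here b is the plume radius, w the mean vertical velocity, u_e the entrainment velocity, and alpha the Taylor entrainment coefficient.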
Abstract:
In achieving higher instruction-level parallelism, software pipelining increases the register pressure in the loop. The usefulness of the generated schedule may therefore be restricted to cases where the register pressure is less than the number of available registers; otherwise, spill instructions need to be introduced. Scheduling these spill instructions in the compact schedule is a difficult task. Several heuristics have been proposed to schedule spill code, but they may generate more spill code than necessary, and scheduling it may necessitate increasing the initiation interval (II). We model the problem of register allocation with spill code generation and scheduling in software-pipelined loops as a 0-1 integer linear program. The formulation minimizes the increase in II by optimally placing spill code, and simultaneously minimizes the amount of spill code produced. To the best of our knowledge, this is the first integrated formulation for register allocation, optimal spill code generation, and scheduling for software-pipelined loops. On loops extracted from the Perfect Club and SPEC benchmarks, the proposed formulation performs better than existing heuristics, preventing an increase in II in 11.11% of the loops and generating 18.48% less spill code on average, with a moderate increase in compilation time.
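To make the flavour of such a formulation concrete (the abstract does not give the actual variables or constraints, so the live-range data, the single register-pressure constraint, and the omission of spill-code scheduling and II modelling below are all assumptions), here is a minimal 0-1 ILP sketch that only selects which values to spill:

```python
# Toy 0-1 ILP in the spirit of the abstract: choose which live values to
# spill so register pressure never exceeds the register count, minimizing
# the number of spilled values.  Illustrative sketch only; the paper's
# formulation also schedules the spill loads/stores and couples them to II.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

NUM_REGS = 2
# live[v] = time slots (within one kernel iteration) where value v is live;
# hypothetical data chosen only to exercise the constraint.
live = {
    "v0": {0, 1, 2},
    "v1": {0, 1},
    "v2": {1, 2, 3},
    "v3": {2, 3},
}
slots = sorted({t for ts in live.values() for t in ts})

prob = LpProblem("toy_spill_selection", LpMinimize)
spill = {v: LpVariable(f"spill_{v}", cat=LpBinary) for v in live}

# Objective: spill as few values as possible.
prob += lpSum(spill.values())

# Register-pressure constraint: at each slot, the values kept in registers
# (live and not spilled) must fit in the register file.
for t in slots:
    live_here = [v for v in live if t in live[v]]
    prob += lpSum(spill[v] for v in live_here) >= len(live_here) - NUM_REGS

prob.solve()
print({v: int(spill[v].value()) for v in live})
```

The paper's integrated formulation additionally decides where in the kernel each spill instruction is placed and how that placement interacts with the II, which this toy omits.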
Abstract:
A customer-reported problem (or Trouble Ticket) in software maintenance is typically solved by one or more maintenance engineers. The decision to allocate the ticket to one or more engineers is generally taken by the lead, based on customer delivery deadlines and a guided complexity assessment from each maintenance engineer. The key challenge in such a scenario is twofold: untruthful (inflated) elicitation of ticket complexity by each engineer to the lead, and the allocation of the ticket to a group of engineers who will solve it within the customer deadline. The allocation decision should ensure Individual and Coalitional Rationality along with Coalitional Stability. In this paper we use game theory to examine the issue of truthful elicitation of ticket complexities by engineers solving a ticket as a group under a specific customer delivery deadline. We formulate this problem as a strategic form game and propose two mechanisms: (1) Division of Labor (DOL) and (2) Extended Second Price (ESP). We show that, under the proposed mechanisms, truth telling by each engineer constitutes a Dominant Strategy Nash Equilibrium of the underlying game. We also analyze the existence of the Individual Rationality (IR) and Coalitional Rationality (CR) properties, which motivate voluntary and group participation. Finally, we use the Core, a solution concept from cooperative game theory, to analyze the stability of the proposed group based on the allocation and payments.
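As background on why second-price-style payments encourage truthful reports, here is a minimal single-ticket, single-winner reverse (procurement) auction following the classical Vickrey rule; the paper's DOL and ESP mechanisms are not specified in the abstract, so the function below is an illustrative assumption rather than their implementation:

```python
# Classical reverse second-price (Vickrey) rule: the engineer with the
# lowest reported cost gets the ticket and is paid the second-lowest report.
# Shown only as a reference point for dominant-strategy truthfulness.
from typing import Dict, Tuple

def reverse_second_price(reports: Dict[str, float]) -> Tuple[str, float]:
    """Return (winner, payment): lowest report wins, paid the second-lowest."""
    ranked = sorted(reports.items(), key=lambda kv: kv[1])
    winner = ranked[0][0]
    payment = ranked[1][1]  # second-lowest reported cost
    return winner, payment

if __name__ == "__main__":
    # Under-reporting cannot lower the payment (it is set by others' reports),
    # and over-reporting only risks losing a profitable ticket, so truthful
    # reporting is a dominant strategy in this simple setting.
    print(reverse_second_price({"alice": 5.0, "bob": 7.5, "carol": 6.0}))
```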
Abstract:
In this paper we propose the architecture of a SoC fabric onto which applications described in a high-level language (HLL) are synthesized. The fabric is a homogeneous layout of computation, storage, and communication resources on silicon. Through a process of composition of resources (as opposed to decomposition of applications), application-specific computational structures are defined on the fabric at runtime to realize different modules of the applications in hardware. Applications synthesized on this fabric offer performance comparable to ASICs while retaining the programmability of processing cores. We outline the application synthesis methodology through examples, and compare our results with software implementations on traditional platforms with unbounded resources.
Abstract:
Today, 80% of the content on the Web is in English, a language spoken by only 8% of the world's population and 5% of India's population. There is a wealth of useful content in the various languages of the world other than English that could be made available on the Internet, but, to date, for various reasons most of it is not yet online. India itself has 18 officially recognized languages and scores of dialects. Although the medium of instruction for most higher education and research in India is English, a substantial amount of literature, including novels, textbooks, and scholarly information, is being generated in the other languages of the country. Many e-governance initiatives are in the respective state languages. In the past, support for different languages by operating systems and software packages was not very encouraging. However, with the advent of Unicode technology, operating systems and software packages now support almost all the major languages of the world that have scripts. In the work reported in this paper, we explain the configuration changes needed for the Eprints.org software to store multilingual content and to create a multilingual user interface.
Abstract:
Software transactional memory (STM) has been proposed as a promising programming paradigm for shared-memory multi-threaded programs, as an alternative to conventional lock-based synchronization primitives. Typical STM implementations employ a conflict detection scheme that works with a uniform access granularity, tracking shared data accesses either at the word/cache-line level or at the object level. It is well known that a single fixed access-tracking granularity cannot meet the conflicting goals of reducing false conflicts without adversely impacting concurrency. A fine granularity, while improving concurrency, can hurt performance due to lock aliasing, lock validation overheads, and additional cache pressure. On the other hand, a coarse granularity can impact performance due to reduced concurrency. Thus, a fixed or uniform granularity access tracking (UGAT) scheme is application-unaware and rarely matches the access patterns of an individual application or of parts of an application, leading to sub-optimal performance in different parts of the application(s). To mitigate the disadvantages of the UGAT scheme, we propose a Variable Granularity Access Tracking (VGAT) scheme in this paper. We propose a compiler-based approach wherein the compiler uses inter-procedural whole-program static analysis to select the access-tracking granularity for different shared data structures of the application, based on the application's data access pattern. We describe our prototype VGAT scheme, using TL2 as our STM implementation. Our experimental results reveal that the VGAT-STM scheme can improve the performance of STAMP benchmark applications by 1.87% up to 21.2%.
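To illustrate the granularity trade-off concretely, here is a toy sketch of word-level versus object-level lock mapping; the stripe count, hash, and object layout are assumptions for illustration only, not the paper's VGAT scheme or TL2's actual metadata layout:

```python
# Toy illustration of access-tracking granularity in an STM: a memory access
# is mapped to a versioned-lock entry either per machine word (fine-grained,
# striped by address, in the spirit of TL2-style STMs) or per object
# (coarse-grained).  Hypothetical constants for illustration only.
NUM_STRIPES = 1 << 20          # size of the striped lock table
WORD_SIZE = 8                  # bytes per word

def word_lock_index(address: int) -> int:
    """Fine granularity: one lock stripe per (hashed) word address.
    Fewer false conflicts, but more lock lookups and cache pressure."""
    return (address // WORD_SIZE) % NUM_STRIPES

def object_lock_index(object_base: int) -> int:
    """Coarse granularity: a single lock for the whole object.
    Cheaper tracking, but any two fields of the object now conflict."""
    return object_base % NUM_STRIPES

# Two fields of the same object: distinct stripes under word-level tracking,
# the same lock under object-level tracking (a potential false conflict).
base = 0x7f3a_c000_1000
print(word_lock_index(base), word_lock_index(base + 64))
print(object_lock_index(base), object_lock_index(base))
```

The VGAT idea described in the abstract amounts to letting static analysis choose, per shared data structure, which of these two mappings (or an intermediate one) the STM uses.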
Abstract:
In this paper we report on the outcomes of a research and demonstration project on human intrusion detection in a large secure space using an ad hoc wireless sensor network. This project has been a unique experience in collaborative research, involving ten investigators (with expertise in areas such as sensors, circuits, computer systems, communication and networking, signal processing, and security) executing a large funded project that spanned three to four years. We report on the specific engineering solution that was developed: the various architectural choices and the associated designs. In addition to the demonstrable system itself, the various problems that arose have given rise to a large amount of basic research in areas such as geographical packet routing, distributed statistical detection, sensors and associated circuits, a low-power adaptive micro-radio, and power-optimising embedded systems software. We provide an overview of the research results obtained.