58 results for software creation infrastructure
Abstract:
The experimental implementation of a quantum algorithm requires the decomposition of unitary operators. Here we treat unitary-operator decomposition as an optimization problem and use a genetic algorithm, a global-optimization method inspired by natural evolution, for operator decomposition. We apply this method to NMR quantum information processing and find a probabilistic way of performing universal quantum computation using global hard pulses. We also demonstrate the efficient creation of the singlet state (a special type of Bell state) directly from thermal equilibrium, using an optimal sequence of pulses.
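To make the idea concrete, the sketch below runs a plain genetic algorithm over a hypothetical three-angle Euler parameterization to approximate a single-qubit Hadamard gate. The parameterization, fidelity measure, and GA settings are illustrative assumptions, not the authors' NMR pulse model.

    # Toy GA search for a unitary decomposition (hypothetical parameterization,
    # not the authors' pulse model): approximate a target single-qubit unitary
    # by a product Rz(a) Rx(b) Rz(c) of rotations.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def rot(gen, angle):
        # SU(2) rotation exp(-i * angle/2 * gen) for gen in {X, Z}
        return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * gen

    def build(params):
        a, b, c = params
        return rot(Z, a) @ rot(X, b) @ rot(Z, c)

    TARGET = (X + Z) / np.sqrt(2)  # Hadamard gate

    def fidelity(params):
        # phase-insensitive gate fidelity |Tr(U_target^dagger U)| / dim
        return abs(np.trace(TARGET.conj().T @ build(params))) / 2

    pop = rng.uniform(-np.pi, np.pi, size=(60, 3))
    for _ in range(200):
        fit = np.array([fidelity(p) for p in pop])
        parents = pop[np.argsort(fit)[::-1][:20]]   # keep the fittest 20
        children = parents[rng.integers(0, 20, 40)] + rng.normal(0.0, 0.1, (40, 3))
        pop = np.vstack([parents, children])        # elitism + Gaussian mutation

    best = max(pop, key=fidelity)
    print("best fidelity:", fidelity(best))         # approaches 1.0

Because the fidelity is phase-insensitive, any ZXZ Euler triple that matches the Hadamard up to a global phase scores 1, so the search has a reachable optimum.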
Abstract:
The Genetic Algorithm for Rule-set Production (GARP) and a Support Vector Machine (SVM), via the free and open-source software (FOSS) openModeller, were used to model probable landslide occurrence points. Environmental layers such as aspect, digital elevation, flow accumulation, flow direction, slope, land cover, compound topographic index, and precipitation were used in the modeling. The simulated output of these techniques was validated against the actual landslide occurrence points, showing 92% (GARP) and 96% (SVM) accuracy when using precipitation in the wettest month, and 91% and 94% accuracy when using precipitation in the wettest quarter of the year.
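For readers unfamiliar with the SVM side of such modeling, here is a minimal sketch using scikit-learn on synthetic data. The feature columns mirror the environmental layers listed above, but the values and labels are fabricated for illustration; openModeller's actual GARP/SVM pipeline differs.

    # Sketch of SVM-based landslide-susceptibility classification on
    # synthetic data (illustrative only, not the openModeller pipeline).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(42)
    n = 1000
    # Hypothetical environmental layers sampled at candidate points:
    X = np.column_stack([
        rng.uniform(0, 60, n),       # slope (degrees)
        rng.uniform(500, 3000, n),   # wettest-month precipitation (mm)
        rng.uniform(2, 20, n),       # compound topographic index
        rng.uniform(0, 2500, n),     # elevation (m)
    ])
    # Synthetic ground truth: steep, wet cells are landslide-prone.
    y = ((X[:, 0] > 30) & (X[:, 1] > 1500)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    model.fit(X_tr, y_tr)
    print("held-out accuracy:", model.score(X_te, y_te))

Validation against held-out occurrence points, as in the study, is what the final accuracy score stands in for here.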
Abstract:
In large, flexible software systems, bloat occurs in many forms, causing excess resource utilization and resource bottlenecks. This results in lost throughput and wasted joules. However, mitigating bloat is not easy; efforts are best applied where the savings would be substantial. To aid this, we develop an analytical model establishing the relation between resource bottlenecks, bloat, performance, and power. Analyses with the model place into perspective results from the first experimental study of the power-performance implications of bloat. In the experiments we find that while bloat reduction can provide as much as 40% energy savings, the degree of impact depends on hardware and software characteristics. We confirm the model's predictions against selected results from our experimental study. Our findings show that a software-only view is inadequate when assessing the effects of bloat: the impact of bloat on physical resource usage and power must be understood from a full-systems perspective to properly deploy bloat-reduction solutions and reap their power-performance benefits.
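The abstract does not reproduce the analytical model itself, so the following is only a back-of-envelope illustration, under assumed numbers, of how a bloat factor at the bottleneck translates into energy per request:

    # Back-of-envelope sketch (assumed numbers, not the paper's model):
    # a bloat factor b inflates per-request work at the bottleneck, so
    # throughput falls to base_rate / b while the server draws roughly
    # constant power when busy.
    def energy_per_request(b, power_watts=150.0, base_rate=1000.0):
        rate = base_rate / b        # requests/s at bloat factor b
        return power_watts / rate   # joules per request

    for b in (1.0, 1.25, 1.67):
        print(f"bloat x{b:.2f}: {energy_per_request(b):.3f} J/request")
    # Removing a ~1.67x bloat factor cuts energy per request by ~40%,
    # the same magnitude of savings reported above.

If power instead scaled with throughput, or the bottleneck sat on a different resource, the arithmetic would change, which is the point of the abstract's full-systems argument.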
Abstract:
There are many applications, such as software for processing customer records in telecom, patient records in hospitals, and email software accessing a single message in a mailbox, that need to access a single record in a database of millions of records. A basic feature of these applications is that they access data sets that are very large but simple in structure. Cloud computing provides the computing requirements of this new generation of applications, which involve very large data sets that cannot be handled efficiently using traditional computing infrastructure. In this paper, we describe the storage services provided by three well-known cloud service providers and compare their features, with a view to characterizing the storage requirements of very large data sets; we hope this will act as a catalyst for the design of storage services for very-large-data-set requirements in the future. We also give a brief overview of other kinds of storage that have emerged recently for cloud computing.
Abstract:
There have been several studies of the performance of TCP-controlled transfers over an infrastructure IEEE 802.11 WLAN, assuming perfect channel conditions. In this paper, we develop an analytical model for the throughput of TCP-controlled file transfers over the IEEE 802.11 DCF with different packet error probabilities for the stations, accounting for the effect of packet drops on the TCP window. Our analysis combines two models: the first is an extension of the usual TCP-over-DCF model for an infrastructure WLAN, in which the throughput of a station depends on the probability that the head-of-the-line packet at the Access Point belongs to that station; the second is a model for the TCP window process for connections with different drop probabilities. Iterative calculation between these models yields the head-of-the-line probabilities, from which performance measures such as throughputs and packet failure probabilities can be derived. We find that, owing to MAC-layer retransmissions, packet losses are rare even with high channel error probabilities, and the stations obtain fair throughputs even when some of them have packet error probabilities as high as 0.1 or 0.2. For some restricted settings we are also able to model tail-drop loss at the AP. Although it involves many approximations, the model captures the system behaviour quite accurately when compared with simulations.
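The iterative coupling described above can be pictured as a generic fixed-point computation. The sketch below uses placeholder update rules (not the paper's equations) to show how head-of-the-line (HOL) probabilities and per-station TCP windows feed back into each other until convergence.

    # Generic fixed-point coupling of two sub-models, in the spirit of the
    # iteration described above; the update rules are placeholders.
    import numpy as np

    p_err = np.array([0.0, 0.1, 0.2])   # per-station channel error probabilities
    residual = p_err ** 4               # loss probability after 4 MAC attempts

    def window_model(hol):
        # Placeholder: mean TCP window shrinks with residual loss and with
        # head-of-line occupancy at the AP.
        return 1.0 / (1.0 + 100.0 * residual + hol)

    def hol_model(window):
        # Placeholder: a station's HOL probability tracks its window share.
        return window / window.sum()

    hol = np.full(3, 1 / 3)             # uniform initial guess
    for i in range(200):
        new_hol = hol_model(window_model(hol))
        if np.max(np.abs(new_hol - hol)) < 1e-9:
            break
        hol = new_hol
    print(f"converged after {i} iterations:", hol)
    # Even with error probabilities of 0.1 and 0.2, MAC retransmissions keep
    # residual loss tiny, so HOL shares (hence throughputs) stay nearly fair,
    # echoing the finding stated above.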
Abstract:
Video decoders used in emerging applications need to be flexible enough to handle a large variety of video formats and to deliver scalable performance under wide variations in workload. In this paper we propose a unified software and hardware architecture for video decoding that achieves scalable performance with flexibility. The lightweight processor tiles and reconfigurable hardware tiles in our architecture enable software and hardware implementations to co-exist, while a programmable interconnect enables dynamic interconnection of the tiles. Our process-network-oriented compilation flow achieves realization-agnostic application partitioning and enables seamless migration across uniprocessor, multiprocessor, partial-hardware, and full-hardware implementations of a video decoder. An application quality-of-service-aware scheduler monitors and controls the operation of the entire system. We prove the concept with a prototype of the architecture on an off-the-shelf FPGA. The prototype shows performance scaling from QCIF to 1080p resolutions in four discrete steps. We also demonstrate that the reconfiguration time is short enough to allow migration from one configuration to another without any frame loss.
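As a toy illustration of the process-network model of computation that such a compilation flow targets (the stage names and token values here are invented), independent stages communicate only through bounded FIFO queues, which is what lets the same graph be mapped onto processor tiles or hardware tiles:

    # Toy process-network pipeline: stages share no state and interact only
    # via bounded queues, so each stage could be realized in software or
    # hardware without changing the graph.
    import queue
    import threading

    def stage(name, fn, q_in, q_out):
        while True:
            token = q_in.get()
            if token is None:              # end-of-stream marker
                if q_out is not None:
                    q_out.put(None)
                return
            result = fn(token)
            if q_out is not None:
                q_out.put(result)
            else:
                print(f"{name}: {result}")

    q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
    threads = [
        threading.Thread(target=stage, args=("entropy", lambda b: b * 2, q1, q2)),
        threading.Thread(target=stage, args=("render", lambda b: b + 1, q2, None)),
    ]
    for t in threads:
        t.start()
    for block in range(5):                 # source process feeds the pipeline
        q1.put(block)
    q1.put(None)
    for t in threads:
        t.join()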
Abstract:
We propose a Cooperative Opportunistic Automatic Repeat ReQuest (CoARQ) scheme to solve the HOL-blocking problem in infrastructure IEEE 802.11 WLANs. HOL blocking occurs when the head-of-the-line packet at the Access Point (AP) queue blocks the transmission of packets to other destinations, resulting in severe throughput degradation. When the AP transmits a packet to a mobile station (STA), some of the nodes in the vicinity can overhear this packet transmission successfully. If the original transmission by the AP is unsuccessful, our CoARQ scheme chooses the node (STA or AP) with the best channel to the intended receiver as a relay, and the chosen relay forwards the AP's packet to the receiver. In this way, our scheme removes the bottleneck at the AP, thereby providing significant improvements in the throughput of the AP. We analyse the performance of our scheme in an infrastructure WLAN under a TCP-controlled file-download scenario, and our analytical results are further validated by extensive simulations.
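The relay-selection rule reduces to a one-liner once channel estimates are available. The sketch below assumes an SNR table as the channel metric, which is our simplification rather than the paper's exact criterion:

    # Minimal sketch of CoARQ-style relay selection: if the AP's transmission
    # fails, the overhearing node with the best channel to the intended
    # receiver forwards the packet (the AP itself is also a candidate).
    def pick_relay(overhearers, snr_to_receiver):
        """overhearers: nodes that decoded the AP's packet;
        snr_to_receiver: node -> estimated SNR (dB) toward the receiver."""
        candidates = [n for n in overhearers if n in snr_to_receiver]
        return max(candidates, key=snr_to_receiver.get, default=None)

    overhearers = {"STA2", "STA5", "AP"}
    snr = {"STA2": 12.0, "STA5": 18.5, "AP": 7.0}   # hypothetical estimates
    print(pick_relay(overhearers, snr))             # -> STA5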
Abstract:
Using a recently developed strong-coupling method, we present a comprehensive theory for doublon production processes in modulation spectroscopy of a three-dimensional system of ultracold fermionic atoms in an optical lattice with a trap. The theoretical predictions compare well to the experimental time traces of doublon production. For experimentally feasible conditions, we provide a quantitative prediction for the presence of a nonlinear "two-photon" excitation at strong modulation amplitudes.
Abstract:
The goal of the work reported in this paper is to use automated combinatorial synthesis to generate alternative solutions to be used as stimuli by designers for ideation. FuncSION, a computational synthesis tool that can automatically synthesize solution concepts for mechanical devices by combining building blocks from a library, is used for this purpose. The objectives of FuncSION are to help generate a variety of functional requirements for a given problem and a variety of concepts to fulfill these functions. A distinctive feature of FuncSION is its focus on the automated generation of spatial configurations, an aspect rarely addressed by other computational synthesis programs. This paper provides an overview of FuncSION in terms of the representation of design problems, the representation of building blocks, and the rules with which building blocks are combined to generate concepts at three levels of abstraction: topological, spatial, and physical. The paper then provides a detailed account of an evaluation of FuncSION's effectiveness in providing stimuli for enhanced ideation.
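A minimal sketch of the combinatorial core, at the topological level only: chain building blocks whose input/output kinds compose into the required transformation. The block library below is invented for illustration and is far simpler than FuncSION's.

    # Toy combinatorial synthesis over a building-block library (invented
    # blocks, topological level only): enumerate chains whose input/output
    # motion kinds compose into the required transformation.
    from itertools import product

    # block name -> (input kind, output kind)
    LIBRARY = {
        "gear_pair":   ("rotation", "rotation"),
        "rack_pinion": ("rotation", "translation"),
        "cam":         ("rotation", "translation"),
        "lever":       ("translation", "translation"),
        "screw":       ("rotation", "translation"),
    }

    def synthesize(src, dst, max_len=2):
        """Yield block chains converting motion kind `src` to kind `dst`."""
        for length in range(1, max_len + 1):
            for chain in product(LIBRARY, repeat=length):
                kind = src
                for block in chain:
                    i, o = LIBRARY[block]
                    if i != kind:
                        break
                    kind = o
                else:
                    if kind == dst:
                        yield chain

    for solution in synthesize("rotation", "translation"):
        print(" -> ".join(solution))

Even this tiny library yields a variety of alternative chains for one functional requirement, which is the kind of variety the tool aims to present as ideation stimuli.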
Abstract:
FreeRTOS is an open-source real-time microkernel with a wide community of users. We present a formal specification of the behaviour of the task part of FreeRTOS, which deals with the creation, management, and scheduling of tasks using priority-based preemption. Our model is written in the Z notation, and we verify its consistency using the Z/EVES theorem prover. This includes a precise statement of the preconditions for all API commands. This task model forms the basis for three dimensions of further work: (a) modelling the rest of the behaviour of queues, time, mutexes, and interrupts in FreeRTOS; (b) refining the models to code to produce a verified implementation; and (c) extending the behaviour of FreeRTOS to multi-core architectures. We propose all three dimensions as benchmark challenge problems for Hoare's Verified Software Initiative.
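For flavor, here is a hypothetical operation schema in the Z style (the names, state, and invariants are invented, not taken from the paper's model), showing how such a specification makes an API precondition explicit:

    % Hypothetical Z-style operation schema (invented, not the paper's model),
    % illustrating an explicit API precondition. Requires a Z typesetting
    % package such as fuzz or zed-csp.
    \begin{schema}{CreateTask}
      \Delta Scheduler \\
      new? : TASK \\
      prio? : \nat
    \where
      new? \notin tasks \\
      prio? \leq maxPriority \\
      tasks' = tasks \cup \{ new? \} \\
      priority' = priority \oplus \{ new? \mapsto prio? \}
    \end{schema}

The first two predicates are the precondition a theorem prover such as Z/EVES would ask the modeller to state and discharge for the corresponding API command.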
Abstract:
The problem of delay-constrained, energy-efficient broadcast in cooperative wireless networks is NP-complete. While the centralised setting admits some heuristic solutions, designing heuristics for distributed implementation poses significant challenges. This is all the more so in wireless sensor networks (WSNs), where nodes are deployed randomly and the topology changes dynamically with node failures, node joins, and environmental conditions. This paper demonstrates that careful design of the network infrastructure can achieve guaranteed delay bounds and energy efficiency, and even meet quality-of-service requirements during broadcast. The paper makes three prime contributions. First, we present a lower bound on energy consumption for broadcast that is tighter than previously proposed bounds. Next, we discuss iSteiner, a lightweight, distributed, and deterministic algorithm for creating the network infrastructure. Finally, iPercolate is the algorithm that exploits this structure to cooperatively broadcast information with guaranteed delivery and delay bounds, while allowing real-time traffic to pass undisturbed.
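The abstract does not spell out iSteiner's construction; as a centralised reference point, the sketch below applies the classic metric-closure Steiner-tree approximation (via networkx) to a hypothetical sensor graph, connecting the broadcast source to the nodes it must reach with low total edge cost.

    # Centralised Steiner-tree baseline on an invented sensor graph; a
    # distributed algorithm like iSteiner would build a comparable backbone
    # without global knowledge.
    import networkx as nx
    from networkx.algorithms.approximation import steiner_tree

    G = nx.Graph()
    G.add_weighted_edges_from([
        ("src", "a", 1.0), ("a", "b", 1.0), ("b", "t1", 1.0),
        ("src", "c", 2.5), ("c", "t2", 1.0), ("b", "t2", 2.0),
        ("a", "t2", 4.0),
    ])
    terminals = ["src", "t1", "t2"]        # nodes the broadcast must reach
    backbone = steiner_tree(G, terminals, weight="weight")
    print(sorted(backbone.edges(data="weight")))

The total weight of such a backbone is the natural quantity to compare against a lower bound on broadcast energy consumption.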
Abstract:
As the volume of data relating to proteins increases, researchers rely more and more on the analysis of published data, increasing the importance of good access to data that range from the supplemental material of individual articles all the way to major reference databases with professional staff and long-term funding. Specialist protein resources fill an important middle ground, providing interactive web interfaces to their databases for a focused topic or family of proteins, using specialized approaches that are not feasible in the major reference databases. Many are labors of love, run by a single lab with little or no dedicated funding, and there are many challenges to building and maintaining them. This perspective arose from a meeting of several specialist protein resources and major reference databases held at the Wellcome Trust Genome Campus (Cambridge, UK) on August 11 and 12, 2014. During this meeting, some common key challenges involved in creating and maintaining such resources were discussed, along with various approaches to address them. In laying out these challenges, we aim to inform users about how these issues impact our resources and to illustrate ways in which working together could enhance their accuracy, currency, and overall value.
Abstract:
Background: Computational protein design is a rapidly maturing field within structural biology, with the goal of designing proteins with custom structures and functions. Such proteins could find widespread medical and industrial applications. Here, we have adapted algorithms from the Rosetta software suite to design much larger proteins, based on ideal geometric and topological criteria. Furthermore, we have developed techniques to incorporate symmetry into designed structures. For our first design attempt, we targeted the (α/β)8 TIM-barrel scaffold. We gained novel insights into TIM-barrel folding mechanisms from studying natural TIM-barrel structures and from analyzing previous TIM-barrel design attempts.
Methods: Computational protein design and analysis were performed using the Rosetta software suite and custom scripts. Genes encoding all designed proteins were synthesized and cloned into the pET20-b vector. Standard circular dichroism and gel chromatography experiments were performed to determine biophysical characteristics. 1D NMR and 2D HSQC experiments were performed to determine structural characteristics.
Results: Extensive protein design simulations coupled with ab initio modeling yielded several all-atom models of ideal, 4-fold symmetric TIM barrels. Four such models were experimentally characterized. The best designed structure (Symmetrin-1) contained a polar, histidine-rich pore forming an extensive hydrogen-bonding network. Symmetrin-1 was easily expressed and readily soluble, and showed circular dichroism spectra characteristic of well-folded α/β proteins. Temperature melting experiments revealed cooperative and reversible unfolding, with a Tm of 44 °C and a Gibbs free energy of unfolding (ΔG°) of 8.0 kJ/mol. Urea denaturation experiments confirmed these observations, revealing a Cm of 1.6 M and a ΔG° of 8.3 kJ/mol. Symmetrin-1 adopted a monomeric conformation with an apparent molecular weight of 32.12 kDa and displayed well-resolved 1D NMR spectra; however, the HSQC spectrum revealed somewhat molten characteristics.
Conclusions: Despite the detection of molten characteristics, the creation of a soluble, cooperatively folding protein represents an advance over previous attempts at TIM-barrel design. Strategies to further improve Symmetrin-1 are elaborated. Our techniques may be used to create other large, internally symmetric proteins.
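As a consistency check on the urea numbers quoted above, the standard linear extrapolation model relates ΔG°, the denaturation midpoint Cm, and the m-value:

    % Linear extrapolation model for chemical denaturation (standard relation,
    % applied here only to the two values reported in the abstract):
    \Delta G(c) = \Delta G^{\circ}_{\mathrm{H_2O}} - m\,c ,
    \qquad
    C_m = \frac{\Delta G^{\circ}_{\mathrm{H_2O}}}{m}
    \;\Rightarrow\;
    m = \frac{8.3\ \mathrm{kJ\,mol^{-1}}}{1.6\ \mathrm{M}}
      \approx 5.2\ \mathrm{kJ\,mol^{-1}\,M^{-1}}

The implied m-value of roughly 5.2 kJ mol⁻¹ M⁻¹ follows directly from the two reported numbers.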