120 results for Computing cost
Abstract:
Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static: resources are allocated during VM instantiation, and any workload change that calls for significantly more or fewer resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which leads either to under-utilized VM resources or to underperforming applications. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those characteristic of web servers. In this paper, we present an elastic resource framework for the IaaS cloud layer that addresses this need. The framework provides a workload forecasting engine that predicts the expected demand at run time; the prediction is input to the resource manager, which modulates resource allocation accordingly. Depending on the prediction error, resources can be over-allocated or under-allocated relative to the actual demand made by the application. Over-allocation leads to unused resources, while under-allocation can cause underperformance. To strike a good trade-off between the two, we derive an excess cost model in which excess allocated resources are captured as an over-allocation cost and under-allocation is captured as a penalty cost for violating the application's service level agreement (SLA). A confidence interval for the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study for an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in the resource allocation requirement while restricting application SLA violations to below 2-3%.
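As a rough illustration of the excess cost idea described above, the sketch below (Python; the function names and the per-unit cost weights are hypothetical, not taken from the paper) charges over-allocated units at an over-allocation rate and under-allocated units at an SLA penalty rate, and allocates toward the upper confidence bound of the forecast.

```python
# Hypothetical sketch of an excess cost model; c_over and c_sla are assumed
# per-unit costs, not values from the paper.

def excess_cost(predicted, actual, c_over=1.0, c_sla=5.0):
    """Cost of allocating 'predicted' resource units when 'actual' were needed."""
    over = max(predicted - actual, 0)    # unused resources
    under = max(actual - predicted, 0)   # shortfall -> SLA penalty
    return c_over * over + c_sla * under

def allocation_with_margin(forecast, ci_upper):
    """Allocate toward the upper confidence bound to trade over-allocation
    cost against SLA violations (illustrative policy only)."""
    return ci_upper

# Example: point forecast 80 units, 95% upper bound 90, actual demand 85
alloc = allocation_with_margin(80, 90)
print(excess_cost(alloc, 85))   # 5 units over-allocated -> cost 5.0
```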
Abstract:
Zinc oxide nanorods (ZnO NRs) have been synthesized on flexible substrates by adopting a novel three-step process. The as-grown ZnO NRs are vertically aligned and show excellent chemical stoichiometry between their constituents. Transmission electron microscopy shows that these NR structures are single crystalline and grown along the <001> direction. Optical studies show that these nanostructures have a direct optical band gap of about 3.34 eV. The proposed methodology for the synthesis of vertically aligned NRs on flexible sheets therefore opens a new route to the development of low-cost flexible devices.
Abstract:
A robust suboptimal reentry guidance scheme is presented for a reusable launch vehicle using the recently developed, computationally efficient model predictive static programming. The formulation uses the nonlinear vehicle dynamics with a spherical, rotating Earth; hard constraints for the desired terminal conditions; and an innovative cost function with several components whose weighting factors account for path and control constraints in a soft-constraint manner, thereby leading to smooth solutions of the guidance parameters. The proposed guidance essentially shapes the trajectory of the vehicle by computing the angle of attack and bank angle that the vehicle should execute. The path constraints are the structural load constraint, the thermal load constraint, and bounds on the angle of attack and bank angle. The terminal constraints are the three-dimensional position and velocity vector components at the end of reentry. Whereas the angle-of-attack command is generated directly, the bank angle command is generated by first generating the required heading angle history and then using it in a dynamic inversion loop that considers the heading angle dynamics. Such a two-loop synthesis of the bank angle leads to better management of the vehicle trajectory and avoids mathematical complexity. Moreover, all bank angle maneuvers are confined to the middle of the trajectory and the vehicle ends the reentry segment with a near-zero bank angle, which is quite desirable. It is also demonstrated that the proposed guidance is sufficiently robust to state perturbations as well as to parametric uncertainties in the model.
Abstract:
Energy harvesting sensor nodes are gaining popularity due to their ability to improve network lifetime and are becoming a preferred choice for green communication. In this paper, we focus on communicating reliably over an additive white Gaussian noise (AWGN) channel using such an energy harvesting sensor node. An important part of this work is the appropriate modeling of energy harvesting as realized by various practical architectures. Our main result is a characterization of the Shannon capacity of the communication system. The key technical challenge lies in dealing with the dynamic (and stochastic) nature of the (quadratic) cost of the input to the channel. As a corollary, we find close connections between the capacity-achieving energy management policies and queueing-theoretic throughput-optimal policies.
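For reference, the baseline against which such energy-management policies are usually compared is the classical AWGN capacity under an average power constraint; this is the standard textbook result, not a formula taken from the paper, where P is the average transmit power and sigma^2 the noise variance:

```latex
% Classical AWGN capacity with average power constraint P and noise variance \sigma^2
C = \tfrac{1}{2}\log_2\!\left(1 + \frac{P}{\sigma^2}\right)\ \text{bits per channel use}
```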
Abstract:
Workplace noise has become a major issue in industry, not only because of workers' health but also because of safety. Electric motors, and in particular inverter-fed induction motors, emit objectionably high levels of noise. This has led to the emergence of a research area concerned with the measurement and mitigation of acoustic noise. This paper presents a low-cost option for the measurement and spectral analysis of acoustic noise emitted by electric motors. The system consists of an electret microphone, an amplifier and a filter, and makes use of the Windows sound card and associated software for data acquisition and analysis. The measurement system is calibrated against a professional sound level meter. Acoustic noise measurements are made on an induction motor drive using the proposed system as per the relevant international standards, and are seen to match closely with those of a professional meter.
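A minimal sketch of the kind of sound-card acquisition and spectral analysis described above, assuming a generic Python stack (sounddevice and NumPy) rather than the paper's own software; the sample rate and capture length are placeholder values.

```python
# Illustrative only: record from the default sound-card input and compute a
# magnitude spectrum. Calibration against a reference sound level meter, as
# done in the paper, is not reproduced here.
import numpy as np
import sounddevice as sd

FS = 44100       # sample rate in Hz (assumed)
DURATION = 5.0   # capture length in seconds (assumed)

samples = sd.rec(int(FS * DURATION), samplerate=FS, channels=1, dtype="float64")
sd.wait()
x = samples[:, 0]

# Magnitude spectrum of the windowed recording
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
print(f"Dominant spectral component near {freqs[np.argmax(spectrum)]:.1f} Hz")
```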
Abstract:
In the search for new distributed phases for Ni-composite coatings, inexpensive and naturally available pumice has been identified as a potential candidate material. The composition of the pumice mineral, as determined by Rietveld analysis, shows the presence of corundum, quartz, mullite, moganite and coesite phases. Pumice stone is crushed, ball-milled, dried and dispersed in a nickel sulfamate bath, and Ni-pumice coatings are electrodeposited at different current densities and magnetic agitation speeds. Pumice particles are uniformly incorporated in the nickel matrix, and Ni-pumice composite coatings with microhardness as high as 540 HK are obtained at the lowest applied current density. In the electrodeposited Ni-pumice coatings, the grain size of Ni increases with the applied current density. The overall intensity of texture development is slightly stronger for the Ni-pumice composite coating than for the plain Ni coating, so texture evolution is probably not the main factor behind the enhanced properties of Ni-pumice coatings. The wear and oxidation resistance of the Ni-pumice coating is comparable to that of a Ni-SiC coating electrodeposited under similar conditions.
Abstract:
Following rising demand for GPS positioning, low-cost receivers are becoming widely available, but their energy demands are still too high. For energy-efficient GPS sensing in delay-tolerant applications, the possibility of offloading a few milliseconds of raw signal samples and leveraging the greater processing power of the cloud to obtain a position fix is being actively investigated. In an attempt to reduce the energy cost of this data offloading operation, we propose Sparse-GPS: a new computing framework for GPS acquisition via sparse approximation. Within the framework, GPS signals can be efficiently compressed by random ensembles. The sparse acquisition information, pertaining to the visible satellites embedded within these limited measurements, can subsequently be recovered by our proposed representation dictionary. Through extensive empirical evaluations, we demonstrate the acquisition quality and energy gains of Sparse-GPS. We show that it is twice as energy efficient as offloading uncompressed data, and has 5-10 times lower energy costs than standalone GPS, with a median positioning accuracy of 40 m.
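The compression step can be pictured as projecting a block of samples onto a small random ensemble and recovering the sparse content from the few measurements kept. The sketch below (Python/NumPy) uses made-up dimensions and a generic orthogonal matching pursuit recovery in the canonical basis, not the paper's GPS-specific representation dictionary; it only illustrates the compressed-sensing idea.

```python
# Toy compressed-sensing sketch: compress a sparse signal with a random
# ensemble and recover it with orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 512, 128, 5                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement ensemble
y = Phi @ x                                      # compressed measurements

def omp(A, y, k):
    """Greedy OMP recovery of a k-sparse vector from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

print(np.linalg.norm(omp(Phi, y, k) - x))  # small reconstruction error
```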
Abstract:
The goal of this work is to reduce the cost of computing the coefficients in the Karhunen-Loeve (KL) expansion. The KL expansion is a useful and efficient tool for discretizing second-order stochastic processes with a known covariance function. Its applications in engineering mechanics include discretizing random field models for elastic moduli, fluid properties, and structural response. The main computational cost of finding the coefficients of this expansion arises from numerically solving an integral eigenvalue problem with the covariance function as the integration kernel; mathematically, this is a homogeneous Fredholm integral equation of the second kind. One widely used method for solving this integral eigenvalue problem is to discretize the eigenfunctions with finite element (FE) bases, followed by a Galerkin projection, which is computationally expensive. In the current work it is first shown that the shape of the physical domain of a random field does not affect the realizations of the field estimated using the KL expansion, although the individual KL terms are affected. Based on this domain independence property, a numerical-integration-based scheme accompanied by a modification of the domain is proposed. In addition to the mathematical arguments establishing domain independence, numerical studies are conducted to demonstrate and test the proposed method. Numerically it is demonstrated that, compared to the Galerkin method, the computational speed gain of the proposed method is three to four orders of magnitude for a two-dimensional example and one to two orders of magnitude for a three-dimensional example, while retaining the same level of accuracy. It is also shown that for separable covariance kernels a further cost reduction of three to four orders of magnitude can be achieved. Both normal and lognormal fields are considered in the numerical studies.
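A minimal Nystrom-style sketch of the numerical-integration route to this eigenvalue problem (Python/NumPy; the 1D domain, exponential covariance kernel, midpoint quadrature and truncation level are all assumptions for illustration, not the paper's setup): discretize the covariance on quadrature nodes, solve the resulting symmetric matrix eigenvalue problem, and keep the leading KL terms.

```python
# Illustrative Nystrom-type discretization of the Fredholm eigenvalue problem
# lambda * phi(x) = \int C(x, y) phi(y) dy on [0, 1], with an assumed
# exponential covariance C(x, y) = exp(-|x - y| / ell).
import numpy as np

ell, n_quad, n_terms = 0.3, 200, 10
x = (np.arange(n_quad) + 0.5) / n_quad        # midpoint quadrature nodes
w = np.full(n_quad, 1.0 / n_quad)             # quadrature weights

C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # covariance matrix
A = np.sqrt(w)[:, None] * C * np.sqrt(w)[None, :]    # symmetrized W^1/2 C W^1/2
eigvals, eigvecs = np.linalg.eigh(A)
idx = np.argsort(eigvals)[::-1][:n_terms]
lam = eigvals[idx]
phi = eigvecs[:, idx] / np.sqrt(w)[:, None]   # eigenfunctions at the nodes

# One realization of the zero-mean Gaussian field via the truncated KL sum
xi = np.random.default_rng(1).standard_normal(n_terms)
field = phi @ (np.sqrt(lam) * xi)
print(lam[:3])   # leading KL eigenvalues
```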
Abstract:
Graphene layers have been transferred directly onto paper without any intermediate layers to yield G-paper, and resistive gas sensors have been fabricated using strips of it. These sensors achieve a remarkable lower limit of detection of approximately 300 parts per trillion (ppt) for NO2, which is comparable to or better than those of other paper-based sensors. Ultraviolet exposure was found to dramatically reduce the recovery time and improve response times. G-paper sensors are also robust against minor strain, which was found to increase sensitivity. G-paper is expected to enable a simple, low-cost, flexible graphene platform.
Abstract:
In this work, we propose an algorithm for optical flow estimation using Approximate Nearest Neighbor Fields (ANNF). The proposed algorithm consists of two steps: flow initialization using ANNF maps, and cost filtering. Flow initialization is done by computing the ANNF map between two consecutive frames using FeatureMatch. The resulting ANNF map represents a noisy optical flow, which is refined by making use of superpixels: the best flow associated with each superpixel is computed by optimizing a cost function. The proposed approach is evaluated on the Middlebury and MPI-Sintel optical flow datasets and is found to be comparable with state-of-the-art methods for optical flow estimation.
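The per-superpixel refinement can be thought of as picking, from the candidate flows contributed by the noisy ANNF map, the one that minimizes a matching cost. The sketch below (Python/NumPy, with a plain sum-of-absolute-differences cost and hypothetical names) only illustrates that selection step; it is not FeatureMatch or the paper's cost-filtering scheme.

```python
# Illustrative selection of the best candidate flow for one superpixel:
# among the (dy, dx) candidates suggested by the ANNF map, keep the one
# with the lowest sum-of-absolute-differences matching cost.
import numpy as np

def sad_cost(frame1, frame2, pixels, flow):
    """SAD cost of displacing the superpixel's pixels by flow = (dy, dx)."""
    dy, dx = flow
    h, w = frame2.shape
    cost = 0.0
    for (y, x) in pixels:
        y2 = int(np.clip(y + dy, 0, h - 1))
        x2 = int(np.clip(x + dx, 0, w - 1))
        cost += abs(float(frame1[y, x]) - float(frame2[y2, x2]))
    return cost

def best_flow(frame1, frame2, pixels, candidate_flows):
    """Return the candidate flow with minimum matching cost."""
    return min(candidate_flows, key=lambda f: sad_cost(frame1, frame2, pixels, f))
```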
Abstract:
This paper presents our work on developing an automated micropositioner and a low-cost dispenser module with a disposable dispenser core. The dispenser core is made of polydimethylsiloxane (PDMS). Once the user specifies the dispensing location in the graphical user interface (GUI), the movement of the micropositioner is automatic. The design, fabrication and characterization results of the dispenser module are also presented. Dispensing experiments are performed with diethanolamine as the working reagent; the minimum dispensed volume achieved is about 4 nL.
Abstract:
The history of computing in India is inextricably intertwined with two interacting forces: the political climate (determined by the political party in power) and government policies, driven mainly by the technocrats and bureaucrats who acted within the boundaries drawn by the political party in power. There were four break points (in 1970, 1978, 1991 and 1998) that changed the direction of the development of computers and their applications. This article explains why these breaks occurred and how they affected the history of computing in India.
Abstract:
A new homoleptic complex [Bi(dtc)3] (1) (dtc = 4-hydroxypiperidine dithiocarbamate) has been synthesized and characterized by microanalysis, IR, UV-Vis, 1H and 13C NMR spectroscopy, and X-ray crystallography. The photoluminescence spectrum of the compound in DMSO solution was recorded. The crystal structure of 1 displays a distorted octahedral geometry around the Bi(III) center, bonded through the sulfur atoms of the dithiocarbamate ligands. TGA indicates that the compound decomposes to a Bi and Bi-S phase system. The Bi and Bi-S obtained from decomposition of the compound have been characterized by pXRD, EDAX and SEM. Solvothermal decomposition of 1 in the absence and presence of two different capping agents yielded three morphologically different Bi2S3 systems, which were deployed as counter-electrodes in dye-sensitized solar cells (DSSCs).
Abstract:
This paper presents experimental results for an attractive control scheme implemented using an 8-bit microcontroller. The power converter involved is a three-phase fully controlled bridge rectifier. A single-quadrant DC drive has been realized, and results are presented for both open-loop and closed-loop implementations.
Abstract:
In geographical forwarding of packets in a large wireless sensor network (WSN) with sleep-wake cycling nodes, we are interested in the local decision problem faced by a node that has "custody" of a packet and has to choose one among a set of next-hop relay nodes to forward the packet toward the sink. Each relay is associated with a "reward" that summarizes the benefit of forwarding the packet through that relay. We seek a solution to this local problem, the idea being that such a solution, if adopted by every node, could provide a reasonable heuristic for the end-to-end forwarding problem. Toward this end, we propose a local relay selection problem consisting of a forwarding node and a collection of relay nodes, with the relays waking up sequentially at random times. At each relay wake-up instant, the forwarder can choose to probe a relay to learn its reward value, based on which it can then decide whether to stop (and forward its packet to the chosen relay) or to continue waiting for further relays to wake up. The forwarder's objective is to select a relay so as to minimize a combination of waiting delay, reward, and probing cost. The local decision problem can be viewed as a variant of the asset selling problem studied in the operations research literature. We formulate the local problem as a Markov decision process (MDP) and characterize the solution in terms of stopping sets and probing sets. We provide results illustrating the structure of the stopping sets, namely the (lower bound) threshold and the stage independence properties. Regarding the probing sets, we make an interesting conjecture that these sets are characterized by upper bounds. Through simulation experiments, we provide insights into the performance of the optimal local forwarding policy and its use as an end-to-end forwarding heuristic.
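To make the stop-or-continue structure concrete, here is a toy asset-selling-style simulation (Python): relays wake up one after another, the forwarder pays a delay cost per wake-up and a probing cost per probe, and stops as soon as a probed reward crosses a fixed threshold. The cost parameters, the uniform reward distribution, and the threshold rule itself are invented for illustration; they are not the optimal MDP policy derived in the paper.

```python
# Toy stopping-rule simulation, not the paper's optimal policy.
import random

DELAY_COST = 0.05   # cost per relay wake-up waited (assumed)
PROBE_COST = 0.02   # cost of probing one relay (assumed)
THRESHOLD = 0.7     # stop once a probed reward exceeds this (assumed)

def forward_one_packet(n_relays=20, seed=0):
    """Return the net benefit (chosen reward minus accumulated waiting and
    probing cost) under a simple threshold stopping rule."""
    rng = random.Random(seed)
    total_cost = 0.0
    for i in range(n_relays):
        total_cost += DELAY_COST + PROBE_COST
        reward = rng.random()                 # probed reward of relay i
        if reward >= THRESHOLD or i == n_relays - 1:
            return reward - total_cost        # forward via this relay

print(forward_one_packet())
```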