996 results for Entity framework


Relevance: 20.00%

Abstract:

Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static: resources are allocated during VM instantiation, and any change in workload leading to a significant increase or decrease in resource needs is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or an underperforming application. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those characterized by web servers. In this paper, we present an elastic resource framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run time; this prediction is input to the resource manager, which modulates resource allocation accordingly. Depending on the prediction error, resources can be over-allocated or under-allocated relative to the actual demand made by the application. Over-allocation leads to unused resources, while under-allocation can cause under-performance. To strike a good trade-off between over-allocation and under-performance, we derive an excess cost model. In this model, excess allocated resources are captured as an over-allocation cost, and under-allocation is captured as a penalty cost for violating the application service level agreement (SLA). The confidence interval of the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study of an academic institution's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in the resource allocation requirement while restricting application SLA violations to below 2-3%.
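The excess cost trade-off described above can be pictured with a small sketch. It is a minimal illustration only: the cost weights, the Gaussian-error assumption, and the helper names below are hypothetical and not the paper's model.

```python
# Hypothetical per-unit costs: over-allocation (idle resources) vs.
# SLA-penalty for under-allocation. Values are illustrative only.
C_OVER = 1.0
C_UNDER = 5.0

def allocation_from_prediction(mu, sigma, z):
    """Allocate at the upper bound of a z-scaled confidence interval
    around the predicted demand mu (Gaussian prediction error assumed)."""
    return mu + z * sigma

def excess_cost(allocated, actual):
    """Over-allocation cost for unused capacity, penalty cost otherwise."""
    if allocated >= actual:
        return C_OVER * (allocated - actual)
    return C_UNDER * (actual - allocated)

# Toy trace: (predicted demand, prediction std-dev, actual demand) per interval.
trace = [(40, 5, 43), (60, 8, 55), (90, 10, 104), (70, 6, 68)]

for z in (0.0, 1.0, 1.96):
    cost = sum(excess_cost(allocation_from_prediction(mu, s, z), actual)
               for mu, s, actual in trace)
    violations = sum(allocation_from_prediction(mu, s, z) < actual
                     for mu, s, actual in trace)
    print(f"z={z:.2f}  total excess cost={cost:.1f}  SLA violations={violations}")
```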

Relevance: 20.00%

Abstract:

We develop a communication-theoretic framework for modeling 2-D magnetic recording channels. Using the model, we define the signal-to-noise ratio (SNR) of the channel in terms of several physical parameters, such as the channel bit density, code rate, bit aspect ratio, and noise parameters. We analyze the problem of optimizing the bit aspect ratio to maximize SNR. The read channel architecture comprises a novel 2-D joint self-iterating equalizer and detection system with noise prediction capability. We evaluate the system performance based on our channel model through simulations. The coded performance with the 2-D equalizer-detector indicates an SNR gain of approximately 5.5 dB over uncoded data.

Relevance: 20.00%

Abstract:

We consider the problem of "fair" scheduling of resources to one of many mobile stations by a centrally controlled base station (BS). The BS is the only entity taking decisions in this framework, based on truthful information from the mobiles about their radio channels. We study the well-known family of parametric alpha-fair scheduling problems from a game-theoretic perspective in which some of the mobiles may be noncooperative. We first show that if the BS is unaware of the noncooperative behavior of the mobiles, the noncooperative mobiles succeed in snatching resources from the cooperative mobiles, resulting in unfair allocations. If the BS is aware of the noncooperative mobiles, a new game arises with the BS as an additional player. It can then do better by ignoring the signals from the noncooperative mobiles. The BS, however, succeeds in eliciting truthful signals from the mobiles only when it uses additional information (signal statistics). This new policy, together with the truthful signals from the mobiles, forms a Nash equilibrium (NE) that we call a Truth Revealing Equilibrium. Finally, we propose new iterative algorithms to implement fair scheduling policies that robustify the otherwise nonrobust (in the presence of noncooperation) alpha-fair scheduling algorithms.
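For background, the standard alpha-fair (gradient) scheduling criterion is sketched below: in each slot, serve the user maximizing its instantaneous rate divided by its average throughput raised to alpha. This is only the baseline alpha-fair rule, not the paper's game-theoretic or truth-revealing policy; the rate trace and smoothing constant are invented for illustration.

```python
import random

ALPHA = 1.0   # alpha = 0: max-rate, alpha = 1: proportional fair, alpha -> inf: max-min
BETA = 0.1    # exponential smoothing constant for average throughput (illustrative)

def alpha_fair_pick(rates, avg_thr, alpha=ALPHA):
    """Pick the user maximizing r_i / R_i^alpha (standard gradient scheduler)."""
    return max(range(len(rates)), key=lambda i: rates[i] / (avg_thr[i] ** alpha))

random.seed(0)
n_users, n_slots = 4, 1000
avg_thr = [1e-3] * n_users          # small positive init to avoid division by zero
served = [0] * n_users

for _ in range(n_slots):
    # Hypothetical reported channel rates; a noncooperative mobile could inflate these.
    rates = [random.uniform(0.5, 1.5 + u * 0.2) for u in range(n_users)]
    i = alpha_fair_pick(rates, avg_thr)
    served[i] += 1
    for j in range(n_users):
        avg_thr[j] = (1 - BETA) * avg_thr[j] + BETA * (rates[j] if j == i else 0.0)

print("slots served per user:", served)
```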

Relevance: 20.00%

Abstract:

This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and thus fail to account for the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value receives contributions from different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. The article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard through to the performance-based evaluation of the liquefaction return period. A case study was carried out for Bangalore, India, based on SPT data and converted CPT values, and the results obtained from the two methods are compared. For an area of 220 km2 in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface wave (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was carried out using 450 borehole records obtained in the study area. The results of the CPT-based analysis match well with those obtained from the corresponding analysis with SPT data.
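The performance-based calculation amounts to integrating the conditional probability of liquefaction over the full hazard deaggregation, rather than using a single acceleration-magnitude pair. A minimal sketch of that integration is given below; the deaggregation bins and the logistic fragility standing in for P[liquefaction | PGA, M] are placeholders, not the SPT/CPT-based models used in the article.

```python
import math

# Hypothetical PSHA deaggregation: (PGA in g, magnitude, annual rate of the bin).
deagg = [
    (0.05, 5.0, 4e-3), (0.05, 6.0, 1e-3),
    (0.10, 5.5, 8e-4), (0.10, 6.5, 3e-4),
    (0.20, 6.0, 1e-4), (0.20, 7.0, 5e-5),
]

def p_liquefaction(pga, mag):
    """Placeholder fragility: probability of liquefaction given PGA and magnitude.
    In the article this would come from the SPT/CPT-based resistance of each layer."""
    z = 8.0 * pga + 1.2 * (mag - 6.0) - 1.5
    return 1.0 / (1.0 + math.exp(-z))

# Mean annual rate of liquefaction = sum over bins of P[liq | a, m] * rate(a, m).
annual_rate = sum(p_liquefaction(a, m) * rate for a, m, rate in deagg)
print(f"annual rate = {annual_rate:.2e}, return period = {1.0 / annual_rate:.0f} years")
```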

Relevance: 20.00%

Abstract:

It is a formidable challenge to arrange tin nanoparticles in a porous matrix to achieve a high-specific-capacity, high-rate-capability anode for lithium-ion batteries. This article discusses a simple and novel synthesis that arranges tin nanoparticles with carbon in a porous configuration for application as an anode in lithium-ion batteries. Direct carbonization of the synthesized three-dimensional Sn-based MOF [K2Sn2(1,4-bdc)3](H2O) (1) (bdc = benzenedicarboxylate) resulted in the stabilization of tin nanoparticles in a porous carbon matrix (abbreviated Sn@C). Sn@C exhibited remarkably high electrochemical lithium stability (tested over 100 charge and discharge cycles) and high specific capacities over a wide range of operating currents (0.2-5 A g-1). The novel synthesis strategy of obtaining Sn@C from a single precursor, as discussed herein, provides an optimal combination of particle size and dispersion for buffering the severe volume changes due to the Li-Sn alloying reaction, and provides fast pathways for lithium and electron transport.

Relevance: 20.00%

Abstract:

Using first-principles calculations, we show that the storage capacity as well as the desorption temperature of MOFs can be significantly enhanced by decorating pyridine (a common linker in MOFs) with metal atoms. The storage capacity of the metal-pyridine complexes is found to depend on the type of decorating metal atom. Among the 3d transition metal atoms, Sc turns out to be the most efficient, storing up to four H2 molecules. Most importantly, Sc does not suffer dimerisation on the surface of pyridine, keeping the storage capacity of every metal atom intact. Based on these findings, we propose a metal-decorated pyridine-based MOF, which has the potential to meet the required H2 storage capacity for vehicular usage. Copyright (C) 2014, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

Pressure-induced phase transformations (PIPTs) occur in a wide range of materials. In general, the bonding characteristics, before and after the PIPT, remain invariant in most materials, and the bond rearrangement is usually irreversible due to the strain induced under pressure. A reversible PIPT associated with a substantial bond rearrangement has been found in a metal-organic framework material, namely [tmenH2][Er(HCOO)4]2 (tmenH2(2+) = N,N,N',N'-tetramethylethylenediammonium). The transition is first-order and is accompanied by a unit cell volume change of about 10%. High-pressure single-crystal X-ray diffraction studies reveal the complex bond rearrangement through the transition. The reversible nature of the transition is confirmed by means of independent nanoindentation measurements on single crystals.

Relevance: 20.00%

Abstract:

In this paper we present a framework for realizing arbitrary instruction set extensions (IEs) that are identified post-silicon. The proposed framework has two components, viz. an IE synthesis methodology and the architecture of a reconfigurable data-path for the realization of such IEs. The IE synthesis methodology ensures maximal utilization of resources on the reconfigurable data-path. In this context, we present the techniques used to realize IEs for applications that demand high throughput or that must process data streams. The reconfigurable hardware, called HyperCell, comprises a reconfigurable execution fabric; the fabric is a collection of interconnected compute units. A typical use case of HyperCell is as a co-processor alongside a host, accelerating the execution of IEs that are defined post-silicon. We demonstrate the effectiveness of our approach by evaluating the performance of some well-known integer kernels realized as IEs on HyperCell. Our methodology for realizing IEs through HyperCells permits overlapping of potentially all memory transactions with computations. By fully pipelining the data-path, we show significant performance improvements for streaming applications over general-purpose-processor-based solutions. (C) 2014 Elsevier B.V. All rights reserved.
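The overlap of memory transactions with computation mentioned above can be pictured with a simple double-buffering sketch: fetch the next block of the stream while the current block is being processed. This is only a schematic software analogy of pipelining, not the HyperCell micro-architecture; the block size and worker functions are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4  # illustrative block size

def load_block(data, i):
    """Stand-in for a memory transaction: fetch the i-th block of the stream."""
    return data[i * BLOCK:(i + 1) * BLOCK]

def compute(block):
    """Stand-in for the IE computation on one block."""
    return [x * x + 1 for x in block]

def pipelined(data):
    """Overlap the load of block i+1 with the computation on block i."""
    n_blocks = (len(data) + BLOCK - 1) // BLOCK
    out = []
    with ThreadPoolExecutor(max_workers=1) as prefetcher:
        pending = prefetcher.submit(load_block, data, 0)
        for i in range(n_blocks):
            block = pending.result()                 # wait for the current block
            if i + 1 < n_blocks:                     # start the next load early
                pending = prefetcher.submit(load_block, data, i + 1)
            out.extend(compute(block))               # computation overlaps the load
        return out

print(pipelined(list(range(10))))
```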

Relevance: 20.00%

Abstract:

Human Leukocyte Antigen (HLA) plays an important role in presenting foreign pathogens to our immune system, thereby eliciting early immune responses. HLA genes are highly polymorphic, giving rise to diverse antigen presentation capability. An important factor contributing to the enormous variation in individual responses to diseases is the difference in HLA profiles. This heterogeneity in allele-specific disease responses decides the overall epidemiological outcome of a disease. Here we propose an agent-based computational framework, capable of incorporating allele-specific information, to analyze disease epidemiology. The framework assumes an SIR model to estimate the average disease transmission and recovery rates. Using an epitope prediction tool, it performs sequence-based epitope detection for a given pathogenic genome and derives an allele-specific disease susceptibility index from the epitope detection efficiency. The resulting allele-specific disease transmission rate is then fed to the agent-based epidemiology model to analyze the disease outcome. The methodology presented here has potential use in understanding how a disease spreads and in identifying effective measures to control it.
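The core idea of scaling transmission by an allele-specific susceptibility index can be sketched with a tiny agent-based SIR simulation. The allele labels, susceptibility values, and rate constants below are made up for illustration and are not derived from any epitope prediction tool.

```python
import random

random.seed(1)

# Hypothetical allele-specific susceptibility indices (e.g., derived from
# epitope detection efficiency); higher means more susceptible.
SUSCEPTIBILITY = {"HLA-X": 0.4, "HLA-Y": 0.8, "HLA-Z": 1.0}
BASE_BETA = 0.05    # baseline per-contact transmission probability (illustrative)
GAMMA = 0.1         # recovery probability per step
CONTACTS = 8        # random contacts per agent per step

agents = [{"allele": random.choice(list(SUSCEPTIBILITY)), "state": "S"}
          for _ in range(2000)]
agents[0]["state"] = "I"   # index case

for step in range(100):
    infected = [a for a in agents if a["state"] == "I"]
    for a in infected:
        for other in random.sample(agents, CONTACTS):
            if other["state"] == "S":
                # Allele-specific transmission: scale the baseline probability
                # by the contact's susceptibility index.
                if random.random() < BASE_BETA * SUSCEPTIBILITY[other["allele"]]:
                    other["state"] = "I"
        if random.random() < GAMMA:
            a["state"] = "R"

by_state = {s: sum(a["state"] == s for a in agents) for s in ("S", "I", "R")}
print(by_state)
```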

Relevance: 20.00%

Abstract:

Polyhedral techniques for program transformation are now used in several proprietary and open-source compilers. However, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or of a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this paper, we attempt to bridge this gap by presenting techniques that can be used to extract a polyhedral representation from dataflow programs and to synthesize dataflow programs from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs that we built using these techniques and other popular research tools such as Clan and Pluto. For the experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30x when running on an 8-core Intel system.

Relevance: 20.00%

Abstract:

Optical emission from emitters strongly interacting among themselves and also with other polarizable matter in close proximity has been approximated by emission from independent emitters. This is primarily due to our inability to evaluate the self-energy matrices and radiative properties of the collective eigenstates of emitters in heterogeneous ensembles. A method to evaluate self-energy matrices that is not limited by geometry or material composition is presented to understand and exploit such collective excitations. Numerical evaluations using this method are used to highlight the significant differences between the independent and the collective modes of emission in nanoscale heterostructures. A set of N Lorentz emitters and other polarizable entities is used to represent the coupled system of a generalized geometry in a volume integral approach. Closed-form relations between the Green tensors of entity pairs in free space and their counterparts in a heterostructure are derived concisely. This is made possible for general geometries because the global matrices consisting of all free-space Green dyads are subject to conservation laws. The self-energy matrix can then be assembled using the evaluated Green tensors of the heterostructure, but a decomposition of its components into their radiative and nonradiative decay contributions is nontrivial. The relations needed to compute the observables of the eigenstates (such as quantum efficiency, power/energy of emission, and radiative and nonradiative decay rates) are presented. A note on the extension of this method to collective excitations that also include strong near-field interactions with a surface is added. (C) 2014 Optical Society of America
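To illustrate how collective modes differ from independent emission, the sketch below diagonalizes the effective coupling matrix of a few identical emitters in the standard scalar coupled-dipole approximation, whose off-diagonal terms come from the free-space scalar Green function. This is only a simplified stand-in: the paper's method uses full dyadic Green tensors, heterogeneous polarizable entities, and a volume-integral treatment.

```python
import numpy as np

GAMMA = 1.0                      # single-emitter radiative decay rate (units of Gamma)
k = 2 * np.pi                    # wavenumber, with the wavelength set to 1
positions = np.array([[0.0, 0, 0], [0.2, 0, 0], [0.4, 0, 0]])  # illustrative chain

N = len(positions)
M = np.zeros((N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        if i == j:
            M[i, j] = -1j * GAMMA / 2           # single-emitter decay (zero detuning)
        else:
            r = np.linalg.norm(positions[i] - positions[j])
            # Scalar free-space coupling between emitters i and j.
            M[i, j] = -(GAMMA / 2) * np.exp(1j * k * r) / (1j * k * r)

eigvals, eigvecs = np.linalg.eig(M)
collective_rates = -2 * eigvals.imag            # decay rate of each collective mode
print("collective decay rates (units of Gamma):", np.round(collective_rates, 3))
```

The super- and sub-radiant rates printed here already differ markedly from the independent-emitter value of one, which is the qualitative point the abstract makes.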

Relevance: 20.00%

Abstract:

It has been shown that iterative re-weighted strategies often improve the performance of many sparse reconstruction algorithms. However, these strategies are algorithm dependent and cannot be easily extended to an arbitrary sparse reconstruction algorithm. In this paper, we propose a general iterative framework and a novel algorithm which iteratively enhance the performance of any given arbitrary sparse reconstruction algorithm. We theoretically analyze the proposed method using the restricted isometry property and derive sufficient conditions for convergence and performance improvement. We also evaluate the performance of the proposed method using numerical experiments with both synthetic and real-world data. (C) 2014 Elsevier B.V. All rights reserved.
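For context, the classic re-weighting idea (Candès-Wakin-Boyd style weights 1/(|x| + eps) wrapped around a plain weighted soft-thresholding solver) is sketched below. The paper's framework is more general and algorithm-agnostic, so treat this only as an illustration of iterative re-weighting; the problem sizes and constants are made up.

```python
import numpy as np

def weighted_ista(A, y, lam, w, n_iter=300):
    """Weighted iterative soft-thresholding: min ||Ax - y||^2 / 2 + lam * sum_i w_i |x_i|."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
    return x

def reweighted(A, y, lam=0.05, rounds=5, eps=1e-2):
    """Iterative re-weighting wrapper: re-solve with weights 1 / (|x| + eps)."""
    w = np.ones(A.shape[1])
    for _ in range(rounds):
        x = weighted_ista(A, y, lam, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(0)
n, m, k = 40, 100, 5
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = reweighted(A, y)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```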

Relevance: 20.00%

Abstract:

Task-parallel languages are increasingly popular. Many of them provide expressive mechanisms for intertask synchronization. For example, OpenMP 4.0 will integrate data-driven execution semantics derived from the StarSs research language. Compared to the more restrictive data-parallel and fork-join concurrency models, the advanced features being introduced into task-parallel models in turn enable improved scalability through load balancing, memory latency hiding, mitigation of the pressure on memory bandwidth, and, as a side effect, reduced power consumption. In this article, we develop a systematic approach to compile loop nests into concurrent, dynamically constructed graphs of dependent tasks. We propose a simple and effective heuristic that selects the most profitable parallelization idiom for every dependence type and communication pattern. This heuristic enables the extraction of interband parallelism (cross-barrier parallelism) in a number of numerical computations that range from linear algebra to structured grids and image processing. The proposed static analysis and code generation alleviate the burden of a full-blown dependence resolver for tracking the readiness of tasks at runtime. We evaluate our approach and algorithms in the PPCG compiler, targeting OpenStream, a representative dataflow task-parallel language with explicit intertask dependences and a lightweight runtime. Experimental results demonstrate the effectiveness of the approach.
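The kind of dependent-task graph such a compiler constructs can be pictured with a small sketch: a tiled 2-D wavefront in which tile (i, j) depends on its left and top neighbours, executed by tracking task readiness. The tile kernel and grid size are invented; in the article such graphs are generated automatically from the loop nest rather than written by hand.

```python
from collections import deque

NI, NJ = 4, 4    # illustrative tile grid

def tile_kernel(i, j):
    """Stand-in for the work of one tile of the loop nest."""
    return i * NJ + j

valid = {(a, b) for a in range(NI) for b in range(NJ)}
# Wavefront dependences: tile (i, j) waits for (i-1, j) and (i, j-1).
deps = {t: {(t[0] - 1, t[1]), (t[0], t[1] - 1)} & valid for t in valid}
remaining = {t: len(d) for t, d in deps.items()}
consumers = {t: [u for u, d in deps.items() if t in d] for t in deps}

ready = deque(t for t, n in remaining.items() if n == 0)
order = []
while ready:
    t = ready.popleft()
    tile_kernel(*t)          # tasks with no pending dependences may run (in parallel)
    order.append(t)
    for u in consumers[t]:
        remaining[u] -= 1
        if remaining[u] == 0:
            ready.append(u)

print("execution order:", order)
```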

Relevance: 20.00%

Abstract:

Metal-organic frameworks (MOFs) and boron nitride both possess novel properties, the former associated with microporosity and the latter with good mechanical properties. We have synthesized composites of the imidazolate-based MOF ZIF-8 and few-layer BN in order to see whether the properties of both materials can be incorporated in the composites. The composites so prepared between BN nanosheets and ZIF-8 have the compositions ZIF-1BN, ZIF-2BN, ZIF-3BN and ~ZIF-4BN. The composites have been characterized by PXRD, TGA, XPS, electron microscopy, IR, Raman and solid-state NMR spectroscopy. The composites possess good surface areas, the actual value decreasing only slightly with increasing BN content. The CO2 uptake remains nearly the same in the composites as in the parent ZIF-8. More importantly, the addition of BN markedly improves the mechanical properties of ZIF-8, a feature that is much desired in MOFs. The observation of microporous features along with improved mechanical properties in a MOF is indeed noteworthy. Such manipulation of properties can be profitably exploited in practical applications.

Relevance: 20.00%

Abstract:

Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is a matroidal network associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. Furthermore, it was shown through the connection between network coding and matroid theory that linear network coding is not always sufficient for general network coding scenarios. The current work attempts to establish a connection between matroid theory and network-error correcting and detecting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of linear network-error detecting codes to arrive at the definition of a matroidal error detecting network (and, similarly, a matroidal error correcting network obtained by abstracting from network-error correcting codes). An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error detecting (correcting) network code if and only if it is a matroidal error detecting (correcting) network associated with a representable matroid. Therefore, constructing such network-error correcting and detecting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa. We then present algorithms that enable the construction of matroidal error detecting and correcting networks with a specified capability of network-error correction. Using these construction algorithms, a large class of hitherto unknown scalar linearly solvable networks with multisource, multicast, and multiple-unicast network-error correcting codes is made available for theoretical use and practical implementation, with parameters such as the number of information symbols, the number of sinks, the number of coding nodes, the error-correcting capability, and so on being arbitrary, limited only by the computing power required to execute the algorithms. The complexity of constructing these networks is shown to be comparable to the complexity of existing algorithms that design multicast scalar linear network-error correcting codes. Finally, we also show that linear network coding is not sufficient for the general network-error correction (detection) problem with arbitrary demands. In particular, for the same number of network errors, we show a network for which there is a nonlinear network-error detecting code satisfying the demands at the sinks, whereas there is no linear network-error detecting code that does the same.