20 results for PARTITIONS
at Indian Institute of Science - Bangalore - India
Abstract:
A recent work obtained closed-form solutions to the problem of optimally grouping a multi-item inventory into subgroups with a common order cycle per group, when the distribution by value of the inventory could be described by a Pareto function. This paper studies the sensitivity of the optimal subgroup boundaries so obtained. Closed-form expressions have been developed to find intervals for the subgroup boundaries at any given level of suboptimality. Graphs have been provided to aid the user in selecting a cost-effective level of aggregation and choosing appropriate subgroup boundaries for a whole range of inventory distributions. The results of the sensitivity analyses demonstrate the flexibility available in the partition boundaries and the cost-effectiveness of a stock control system with just three groups, and thus also provide theoretical support for the intuitive ABC system of classifying items.
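As a hedged illustration of the grouping idea (this is the textbook ABC rule by cumulative value share, not the paper's closed-form optimal boundaries; the 70%/90% cut points are arbitrary):

```python
def abc_groups(annual_values, cuts=(0.70, 0.90)):
    """Assign items to A/B/C groups by cumulative share of total value,
    a stand-in for the optimal Pareto-based boundaries in the paper."""
    total = float(sum(annual_values))
    order = sorted(range(len(annual_values)),
                   key=lambda i: annual_values[i], reverse=True)
    groups, cumulative = {}, 0.0
    for i in order:
        cumulative += annual_values[i] / total
        groups[i] = ("A" if cumulative <= cuts[0]
                     else "B" if cumulative <= cuts[1] else "C")
    return groups

print(abc_groups([500, 300, 90, 50, 30, 20, 10]))
```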
Abstract:
In this paper we analyze a deploy-and-search strategy for multi-agent systems. Mobile agents equipped with sensors carry out a search operation in the search space. The lack of information about the search space is modeled as an uncertainty density distribution over the space, assumed to be known to the agents a priori. In each step, the agents deploy themselves optimally so as to maximize the per-step reduction in the uncertainty density. We analyze the proposed strategy for convergence and spatial distributedness. The control law moving the agents has been analyzed for stability and convergence using LaSalle's invariance principle, and for spatial distributedness under a few realistic constraints on the control input, such as constant speed, a limit on maximum speed, and sensor range limits. The simulation experiments show that the strategy successfully reduces the average uncertainty density below the required level.
Abstract:
Splittings of a free group correspond to embedded spheres in the 3-manifold M = #_k(S^2 × S^1). These can be represented in a normal form due to Hatcher. In this paper, we determine the normal form in terms of crossings of partitions of ends corresponding to normal spheres, using a graph-of-trees representation for normal forms. In particular, we give a constructive proof of a criterion determining when a conjugacy class in π_2(M) can be represented by an embedded sphere.
Abstract:
This paper addresses the problem of multiagent search in an unknown environment. The agents are autonomous in nature and are equipped with the sensors necessary to carry out the search operation. The uncertainty, or lack of information about the search area, is known a priori as a probability density function. The agents are deployed optimally so as to maximize the one-step uncertainty reduction, and they continue to deploy themselves until the uncertainty density over the search space is reduced below a minimum acceptable level. It has been shown, using LaSalle's invariance principle, that a distributed control law which moves each agent towards the centroid of its Voronoi partition, modified by the sensor range, leads to single-step optimal deployment. This principle is then used to devise search trajectories for the agents. The simulations were carried out in 2D space with saturation on the agents' speeds. The results show that the per-step control strategy indeed moves the agents to their respective centroids and that the algorithm reduces the uncertainty distribution to the required level within a few steps.
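A minimal sketch of one such deployment step (the function and parameter names are illustrative, not the authors' code): each agent is assigned the sample points nearest to it, forming a discrete Voronoi cell, and then moves toward the uncertainty-weighted centroid of its cell with its speed saturated.

```python
import numpy as np

def deployment_step(agents, pts, density, max_speed=0.05):
    """One step: partition sample points by nearest agent (a discrete
    Voronoi partition), then move each agent toward the density-weighted
    centroid of its cell, saturating the speed."""
    d = np.linalg.norm(pts[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)          # nearest agent for each point
    new_agents = agents.copy()
    for i in range(len(agents)):
        w = density[owner == i]
        if w.sum() == 0:
            continue                  # empty cell: stay put
        centroid = (pts[owner == i] * w[:, None]).sum(axis=0) / w.sum()
        step = centroid - agents[i]
        dist = np.linalg.norm(step)
        if dist > max_speed:          # speed saturation
            step *= max_speed / dist
        new_agents[i] = agents[i] + step
    return new_agents
```

Iterating this step until the remaining density mass falls below a threshold mirrors the stopping rule described above.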
Abstract:
This letter deals with a three-dimensional analysis of circular sectors and annular segments resulting from the partitioning of a round (cylindrical) duct for use in an active noise control system. The relevant frequency equations are derived for a stationary medium and solved numerically to arrive at the cut-on frequencies of the first few modes. The resulting table indicates, among other things, that azimuthal partitioning does not raise the cutoff frequency (the smallest cut-on frequency) beyond a particular value, and that radial partitioning is counterproductive in that respect.
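A rough illustration of how such cut-on frequencies are computed numerically (a hard-walled, stationary-medium sketch with illustrative values, not the letter's derivation): partitioning a duct into a sector of angle theta changes the admissible azimuthal orders to nu = m*pi/theta, and each cut-on frequency follows from a zero of the derivative of the Bessel function J_nu.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jvp

def first_cuton(nu, radius, c=343.0, x_max=30.0):
    """Lowest cut-on frequency for azimuthal order nu in a hard-walled
    duct of the given radius: c*x/(2*pi*radius), with x the first
    positive zero of J'_nu (located by a sign change, then refined)."""
    xs = np.linspace(max(nu, 0.1) + 1e-3, x_max, 4000)
    vals = jvp(nu, xs)
    for a, b, fa, fb in zip(xs, xs[1:], vals, vals[1:]):
        if fa * fb < 0:
            x = brentq(lambda t: jvp(nu, t), a, b)
            return c * x / (2 * np.pi * radius)
    raise ValueError("no zero found; increase x_max")

# full duct (nu = 1) vs. a quarter-duct sector (nu = pi/(pi/2) = 2)
print(first_cuton(1.0, 0.1), first_cuton(2.0, 0.1))
```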
Abstract:
Turbulent mixed convection flow and heat transfer in a shallow enclosure, with and without partitions and with a series of block-like heat-generating components, is studied numerically for a range of Reynolds and Grashof numbers with a time-dependent formulation. The flow and temperature distributions are taken to be two-dimensional. Regions with the same velocity and temperature distributions can be identified by assuming repeated placement of the blocks and fluid entry and exit openings at regular distances, neglecting end-wall effects. One half of such a module is chosen as the computational domain, taking into account the symmetry about the vertical centreline. The mixed convection inlet velocity is treated as the sum of forced and natural convection components, with the individual components delineated based on the pressure drop across the enclosure. The Reynolds number is based on the forced convection velocity. Turbulence computations are performed using the standard k–ε model and the Launder–Sharma low-Reynolds-number k–ε model. The results show that higher Reynolds numbers tend to create a recirculation region of increasing strength in the core region and that the effect of buoyancy becomes insignificant beyond a Reynolds number of typically 5×10^5. The Euler number in turbulent flows is higher by about 30 per cent than that in the laminar regime. The dimensionless inlet velocity in pure natural convection varies as Gr^(1/3). Results are also presented for a number of quantities of interest, such as the flow and temperature distributions, Nusselt number, pressure drop, and the maximum dimensionless temperature in the block, along with correlations.
Abstract:
A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters, which are then merged at higher levels to obtain the final classification. This algorithm leads to the same classification as the hierarchical agglomerative clustering algorithm when the clusters are well separated. Its advantages are a short run time and a small storage requirement, and it is observed that the savings in storage space and computation time increase nonlinearly with the sample size.
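A compact sketch of the idea using SciPy's hierarchical clustering (the partition counts and the merge-by-centroid shortcut are illustrative; the paper's merging criterion may differ):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def multilevel_agglomerative(X, n_partitions=4, n_sub=10, n_final=3, seed=0):
    """Cluster each random partition separately, then agglomerate the
    resulting sub-clusters (represented by centroids) at the top level."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    centroids, members = [], []
    for part in np.array_split(idx, n_partitions):
        labels = fcluster(linkage(X[part], method="average"),
                          t=n_sub, criterion="maxclust")
        for c in np.unique(labels):
            pts = part[labels == c]
            centroids.append(X[pts].mean(axis=0))
            members.append(pts)
    top = fcluster(linkage(np.vstack(centroids), method="average"),
                   t=n_final, criterion="maxclust")
    final = np.empty(len(X), dtype=int)
    for sub, lab in enumerate(top):
        final[members[sub]] = lab   # propagate top-level label down
    return final
```

Because each partition is clustered independently, the quadratic distance computations shrink roughly with the square of the partition size, which is one way to see the nonlinear savings noted above.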
Abstract:
CMPs enable the simultaneous execution of multiple applications on the same platform, sharing cache resources. Diversity in the cache access patterns of these simultaneously executing applications can potentially trigger inter-application interference, leading to cache pollution. Whereas a large cache can ameliorate this problem, the larger power consumption that comes with increasing cache size, amplified at sub-100 nm technologies, makes this solution prohibitive. In this paper, in order to address the power-aware performance of caches, we propose a caching structure that provides: 1. definition of application-specific cache partitions as aggregations of caching units (molecules), where the parameters of each molecule, namely size, associativity, and line size, are chosen so that its power consumption and access time are optimal for the given technology; 2. application-specific resizing of cache partitions, with variable and adaptive associativity per cache line, way size, and variable line size; and 3. a replacement policy that is transparent to the partition in terms of size and heterogeneity in associativity and line size. Through simulation studies we establish the superiority of molecular caches (caches built as aggregations of molecules), which offer a 29% power advantage over an equivalently performing traditional cache.
Abstract:
The hot deformation characteristics of alpha-zirconium in the temperature range of 650 °C to 850 °C and the strain-rate range of 10^-3 to 10^2 s^-1 are studied with the help of a power dissipation map developed on the basis of the Dynamic Materials Model [7,8,9]. The processing map describes the variation of the efficiency of power dissipation, η = 2m/(m + 1), calculated on the basis of the strain-rate sensitivity parameter (m), which partitions power dissipation between thermal and microstructural means. The processing map reveals a domain of dynamic recrystallization in the range of 730 °C to 850 °C and 10^-2 to 1 s^-1, with a peak efficiency of 40 pct at 800 °C and 0.1 s^-1, which may be considered the optimum hot-working parameters. The characteristics of dynamic recrystallization are similar to those of static recrystallization regarding the sigmoidal variation of grain size (or hardness) with temperature, although the dynamic recrystallization temperature is much higher. When deformed at 650 °C and 10^-3 s^-1, alpha-zirconium undergoes texture-induced dynamic recovery, while at strain rates higher than 1 s^-1 it exhibits microstructural instabilities in the form of localized shear bands, which are to be avoided in processing.
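The quoted peak efficiency follows directly from the η formula: a strain-rate sensitivity of m = 0.25 gives η = 2(0.25)/(1.25) = 0.4, i.e. 40 pct. A one-line check (the value of m is illustrative):

```python
def dissipation_efficiency(m):
    """Efficiency of power dissipation, eta = 2m/(m + 1)."""
    return 2 * m / (m + 1)

print(dissipation_efficiency(0.25))  # 0.4 -> the 40 pct peak above
```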
Abstract:
A fairly comprehensive computer program incorporating explicit expressions for the four-pole parameters of concentric-tube resonators, plug mufflers, and three-duct cross-flow perforated elements has been used for parametric studies. The parameters considered are the hole diameter, the center-to-center distance between consecutive holes (which determines porosity), the incoming mean-flow Mach number, the area expansion ratio, the number of partitions or chambers within a given overall shell length, and the relative lengths of these partitions or chambers, all normalized with respect to the exhaust pipe diameter. Transmission loss has been plotted as a function of a normalized frequency parameter. The effect of tail pipe length on insertion loss for an anechoic source has also been studied. These studies have been supplemented by empirical expressions, developed from experimental observations, for the normalized static pressure drop of different types of perforated-element mufflers.
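For orientation, the classical transmission-loss formula for a simple (unperforated) expansion chamber illustrates the kind of TL-versus-normalized-frequency curves such a program produces; the paper's four-pole expressions for perforated elements are considerably more involved:

```python
import numpy as np

def expansion_chamber_tl(kl, area_ratio):
    """Classical TL of a simple expansion chamber versus the normalized
    frequency parameter kL (stationary medium, no perforates)."""
    h = area_ratio
    return 10 * np.log10(1 + 0.25 * (h - 1 / h) ** 2 * np.sin(kl) ** 2)

print(expansion_chamber_tl(np.pi / 2, 4.0))  # peak TL (about 6.5 dB) for h = 4
```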
Abstract:
Parallel execution of computational mechanics codes requires efficient mesh-partitioning techniques. These techniques divide the mesh into a specified number of submeshes of approximately the same size while minimising the number of interface nodes between the submeshes. This paper describes a new mesh-partitioning technique employing Genetic Algorithms. The proposed algorithm operates on the deduced graph (dual or nodal graph) of the given finite element mesh rather than directly on the mesh itself. The algorithm works by first constructing a coarse graph approximation using an automatic graph coarsening method. The coarse graph is partitioned and the results are interpolated onto the original graph to initialise an optimisation of the graph partition problem. In practice, a hierarchy of graphs (usually more than two) is used to obtain the final graph partition. The proposed partitioning algorithm is applied to graphs derived from unstructured finite element meshes describing practical engineering problems, and also to several example graphs related to finite element meshes given in the literature. The test results indicate that the proposed GA-based graph partitioning algorithm generates high-quality partitions that are superior to those of spectral and multilevel graph partitioning algorithms.
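A toy sketch of the GA core for graph bisection (the coarsening and interpolation stages are omitted, and all parameters are illustrative, not the paper's settings): a chromosome is a 0/1 part assignment per node, and the fitness penalizes both the edge cut and part-size imbalance.

```python
import numpy as np

def edge_cut(assign, edges):
    """Number of edges whose endpoints fall in different parts."""
    return sum(int(assign[u] != assign[v]) for u, v in edges)

def ga_bisect(n_nodes, edges, pop=40, gens=200, balance_penalty=2.0, seed=0):
    """Toy GA for graph bisection with elitism, one-point crossover,
    and bit-flip mutation."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, size=(pop, n_nodes))

    def fitness(ind):
        return edge_cut(ind, edges) + balance_penalty * abs(ind.sum() - n_nodes / 2)

    for _ in range(gens):
        P = P[np.argsort([fitness(ind) for ind in P])]   # best first
        children = []
        while len(children) < pop - pop // 2:
            a, b = P[rng.integers(0, pop // 2, size=2)]  # parents from elite half
            cut = rng.integers(1, n_nodes)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n_nodes) < 0.01] ^= 1       # bit-flip mutation
            children.append(child)
        P = np.vstack([P[: pop // 2], children])
    return min(P, key=fitness)
```

In the multilevel setting described above, such a GA would be run on the coarse graph and its best partition interpolated back to seed optimisation on the finer graphs.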
Abstract:
Today's feature-rich multimedia products require embedded system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of an embedded system strongly influences critical system design objectives such as area, power, and performance. Hence the embedded system designer performs a complete memory architecture exploration to custom-design a memory architecture for a given set of applications; the designer is further interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce a short design cycle time. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of exhaustive-search-based memory exploration at the outer level and a two-step integrated data layout for SPRAM-Cache-based architectures at the inner level. In this two-step approach, the first step partitions data between SPRAM and Cache, and the second performs cache-conscious data layout. We formulate the cache-conscious data layout as a graph partitioning problem and show that our approach gives up to 34% improvement over an existing approach while also optimizing the off-chip memory address space. We evaluated our approach on 3 embedded multimedia applications; it explores several hundred memory configurations for each application, yielding several optimal design points in a few hours of computation on a standard desktop.
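A plausible greedy sketch of the first (data-partitioning) step, under assumptions of our own (the paper's actual heuristic is not reproduced; the names and the access-density criterion are illustrative):

```python
def partition_data(objects, spram_capacity):
    """Place the most access-intensive data objects in SPRAM until it is
    full; everything else is served through the cache.
    objects: list of (name, size_bytes, access_count) tuples."""
    spram, cached, used = [], [], 0
    for name, size, accesses in sorted(objects,
                                       key=lambda o: o[2] / o[1],
                                       reverse=True):
        if used + size <= spram_capacity:
            spram.append(name)
            used += size
        else:
            cached.append(name)
    return spram, cached

print(partition_data([("buf", 4096, 90000), ("lut", 1024, 80000),
                      ("frame", 65536, 20000)], spram_capacity=8192))
```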
Abstract:
In this paper we explore an implementation of a high-throughput streaming application on REDEFINE-v2, an enhancement of REDEFINE. REDEFINE is a polymorphic ASIC combining the flexibility of a programmable solution with the execution speed of an ASIC. In REDEFINE, Compute Elements are arranged in an 8x8 grid connected via a Network on Chip (NoC) called RECONNECT, to realize the various macrofunctional blocks of an equivalent ASIC. For a 1024-point FFT we carry out an application-architecture design space exploration by examining various characterizations of the Compute Elements in terms of instruction store size. We further study the impact of using application-specific, vectorized FUs. By setting up different partitions of the FFT algorithm for persistent execution on REDEFINE-v2, we derive the benefits of pipelined execution for higher performance. The impact of the REDEFINE-v2 micro-architecture for an arbitrary N-point FFT (N > 4096) is also analyzed. We report the various algorithm-architecture tradeoffs in terms of area and execution speed against an ASIC implementation, and in addition we compare the performance gain with respect to a GPP.
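As background on what partitioning an FFT for pipelined execution involves, the Cooley-Tukey identity splits a 1024-point FFT into two stages of smaller FFTs that can run on separate compute resources (a generic sketch, not the REDEFINE mapping):

```python
import numpy as np

def fft_two_stage(x, n1=32, n2=32):
    """Cooley-Tukey split of a length n1*n2 FFT: n1 column FFTs of
    length n2, twiddle multiplication, then n2 row FFTs of length n1.
    Matches np.fft.fft on the same input."""
    n = n1 * n2
    a = np.asarray(x, dtype=complex).reshape(n2, n1)  # a[m, l] = x[n1*m + l]
    inner = np.fft.fft(a, axis=0)                     # stage 1 (pipelineable)
    twiddle = np.exp(-2j * np.pi *
                     np.outer(np.arange(n2), np.arange(n1)) / n)
    outer = np.fft.fft(inner * twiddle, axis=1)       # stage 2
    return outer.T.reshape(n)

x = np.random.rand(1024)
print(np.allclose(fft_two_stage(x), np.fft.fft(x)))  # True
```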
Abstract:
Packet forwarding is a memory-intensive application requiring multiple accesses through a trie structure. The efficiency of a cache for this application critically depends on the placement function to reduce conflict misses. Traditional placement functions use a one-level mapping that naively partitions trie nodes into cache sets. However, as a significant percentage of trie nodes are not useful, these schemes suffer from a non-uniform distribution of useful nodes to sets, which in turn results in increased conflict misses. Newer organizations such as variable-associativity caches achieve flexibility in placement at the expense of increased hit latency, which makes them unsuitable for L1 caches. We propose a novel two-level mapping framework that retains the hit latency of one-level mapping yet incurs fewer conflict misses. This is achieved by introducing a second-level mapping which reorganizes the nodes in the naive initial partitions into refined partitions with a near-uniform distribution of nodes. Further, as this remapping is accomplished simply by adapting the index bits to a given routing table, the hit latency is not affected. We propose three new schemes which result in up to a 16% reduction in the number of misses and a 13% speedup in memory access time. In comparison, an XOR-based placement scheme, known to perform extremely well for general-purpose architectures, obtains up to a 2% speedup in memory access time.
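For concreteness, a naive one-level index and the XOR-based placement used as the comparison baseline look roughly as follows (bit widths are illustrative; the proposed two-level remapping, which adapts the index bits to the routing table, is not reproduced here):

```python
LINE_BITS = 6   # 64-byte cache lines (illustrative)
SET_BITS = 7    # 128 sets (illustrative)

def naive_index(addr):
    """One-level mapping: the low-order address bits select the set."""
    return (addr >> LINE_BITS) & ((1 << SET_BITS) - 1)

def xor_index(addr):
    """XOR-based placement: fold the next bit field into the index so
    that addresses conflicting under the naive map spread across sets."""
    lo = (addr >> LINE_BITS) & ((1 << SET_BITS) - 1)
    hi = (addr >> (LINE_BITS + SET_BITS)) & ((1 << SET_BITS) - 1)
    return lo ^ hi
```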
Abstract:
This paper reports on our study of the edge of the 2/5 fractional quantum Hall state, which is more complicated than the edge of the 1/3 state because of the presence of edge sectors corresponding to different partitions of composite fermions in the lowest two Λ levels. The addition of an electron at the edge is a nonperturbative process and it is not a priori obvious in what manner the added electron distributes itself over these sectors. We show, from a microscopic calculation, that when an electron is added at the edge of the ground state in the [N_1, N_2] sector, where N_1 and N_2 are the numbers of composite fermions in the lowest two Λ levels, the resulting state lies in either the [N_1 + 1, N_2] or the [N_1, N_2 + 1] sector; adding an electron at the edge is thus equivalent to adding a composite fermion at the edge. The coupling to other sectors of the form [N_1 + 1 + k, N_2 - k], with k an integer, is negligible in the asymptotically low-energy limit. This study also allows a detailed comparison with the two-boson model of the 2/5 edge. We compute the spectral weights and find that, while the individual spectral weights are complicated and nonuniversal, their sum is consistent with an effective two-boson description of the 2/5 edge.